Increased Dosage of Dyrk1A Alters Alternative Splicing Factor (ASF)-regulated Alternative Splicing of Tau in Down Syndrome*♦ Two groups of tau, 3R- and 4R-tau, are generated by alternative splicing of tau exon 10. Normal adult human brain expresses equal levels of them. Disruption of the physiological balance is a common feature of several tauopathies. Very early in their life, individuals with Down syndrome (DS) develop Alzheimer-type tau pathology, the molecular basis for which is not fully understood. Here, we demonstrate that Dyrk1A, a kinase encoded by a gene in the DS critical region, phosphorylates alternative splicing factor (ASF) at Ser-227, Ser-234, and Ser-238, driving it into nuclear speckles and preventing it from facilitating tau exon 10 inclusion. The increased dosage of Dyrk1A in DS brain due to trisomy of chromosome 21 correlates to an increase in 3R-tau level, which on abnormal hyperphosphorylation and aggregation of tau results in neurofibrillary degeneration. Imbalance of 3R- and 4R-tau in DS brain by Dyrk1A-induced dysregulation of alternative splicing factor-mediated alternative splicing of tau exon 10 represents a novel mechanism of neurofibrillary degeneration and may help explain early onset tauopathy in individuals with DS. The microtubule-associated protein tau plays an important role in the polymerization and stabilization of neuronal microtubules. Tau is thus crucial to both the maintenance of the neuronal cytoskeleton and the maintenance of the axonal transport. Abnormal hyperphosphorylation and accumulation of this protein into neurofibrillary tangles (NFTs) 2 in neurons, first discovered in Alzheimer disease (AD) brain (1,2), is now known to be a characteristic of several related neurodegenerative disorders called tauopathies (3). Several different etiopathogenic mechanisms lead to development of NFTs (4). Adult human brain expresses six isoforms of tau from a single gene by alternative splicing of its pre-mRNA (5,6). Inclusion or exclusion of exon 10 (E10), which codes for the second microtubule-binding repeat, divides tau isoforms into two main groups, three (3R)-or four (4R)-microtubule-binding repeat tau. They show key differences in their interactions with tau kinases as well as their biological function in the polymerization and stabilization of neuronal microtubules. In the adult human brain, 3R-tau and 4R-tau are expressed at similar levels (5,7). Several specific mutations in the tau gene associated with frontotemporal dementias with Parkinsonism linked to chromosome 17 (FTDP-17) cause dysregulation of tau E10 splicing, leading to a selective increase in either 3R-tau or 4R-tau. It has therefore been suggested that equal levels of 3R-tau and 4R-tau may be critical for maintaining optimal neuronal physiology (8). Down syndrome (DS), caused by partial or complete trisomy of chromosome 21, is the most common chromosomal disorder and one of the leading causes of mental retardation in humans. Individuals with DS develop Alzheimer-type neurofibrillary degeneration as early as the fourth decade of life (9). The presence of Alzheimer-type amyloid pathology in DS is attributed to an extra copy of APP gene. However, the molecular basis of neurofibrillary pathology remains elusive. Alternative splicing of tau E10 is tightly regulated by complex interactions of splicing factors with cis-elements located mainly at E10 and intron 10. 
Serine/arginine-rich (SR) proteins are a superfamily of conserved splicing factors in metazoans and play very important roles in alternative splicing (10). One such protein, splicing factor 2, also called alternative splicing factor (ASF), plays essential and regulatory roles in alternative splicing of E10 by binding to a polypurine enhancer of exonic splicing enhancer located at tau E10 (11,12). The function and subcellular localization of ASF is tightly regulated by phosphorylation (13-16). Dyrk1A (dual-specificity tyrosine phosphorylation-regulated kinase 1A) lies at the Down syndrome critical region of chromosome 21 and contributes to several phenotypes of DS in transgenic mice (17,18). Multiple biological functions of Dyrk1A are suggested by its interaction with a myriad of cellular proteins including transcription and splicing factors (19). It is distributed throughout the nucleoplasm with a predominant accumulation in nuclear speckles (20,21), the storage site of inactivated SR proteins, including ASF. Because of its overexpression in DS brain and its predominant localization in nuclear speckles, we hypothesized that Dyrk1A could affect phosphorylation of ASF, and in doing so, disturb ASF-regulated alternative splicing of tau E10, leading to the apparent dysregulation of the balance of 3R-tau and 4R-tau. In the present study, we provide direct evidence that Dyrk1A can phosphorylate ASF at Ser-227, Ser-234, and Ser-238, driving it into nuclear speckles. By preventing its association with nascent transcripts, phosphorylation of ASF by Dyrk1A causes exclusion of tau E10, leading to an increase in 3R-tau level and an imbalance of 3R-tau and 4R-tau in DS brain. Dysregulation of alternative splicing of tau E10 represents a novel mechanism of neurofibrillary degeneration in DS and offers a unique therapeutic target. EXPERIMENTAL PROCEDURES Human Brain Tissue-Tissue from the temporal cortices of six DS and six control brains (see Table 1) for biochemical studies was obtained from the Brain Bank for Developmental Disabilities and Aging of the New York State Institute for Basic Research in Developmental Disabilities. Diagnosis of all human cases was genetically and histopathologically confirmed, and the brain tissue samples were stored at −70°C until used. Plasmid Construction and DNA Mutagenesis-pGEX-2T-ASF was constructed by PCR amplification from pCEP4-ASF and subcloning into pGEX-2T to express GST-ASF fusion protein. Mutation of Ser-227 to Ala of ASF was achieved by using the QuikChange II site-directed mutagenesis kit (Stratagene, La Jolla, CA) with primers (forward, 5′-caggagtcgcagttacgccccaaggagaagcagag-3′, and reverse, 5′-ctctgcttctccttggggcgtaactgcgactcctg-3′). The mutation of Ser-234 or Ser-238 to Ala was generated by using PCR from vector pGEX-2T-ASF with the same forward primer as described above and different reverse primers (5′-cggaattcttatgtacgagagcgagatctgctatgacggggagaatagcgtggtgctcctctgc-3′ for Ser-234 to Ala and 5′-cggaattcttatgtacgagagcgagatctgctatgacggggagcatagc-3′ for Ser-238 to Ala). The mutations at Ser-227, Ser-234, and Ser-238 to three Ala (ASF S3A) were produced by using PCR to pGEX-2T-ASF S227A with reverse primer (5′-cggaattcttatgtacgagagcgagatctgctatgacggggagcatagcgtggtgctcctctgc-3′). All the PCR products were digested and inserted into pGEX-2T. The mutations were confirmed by DNA sequencing analysis.
For mammalian vectors, ASF mutants were constructed by digestion of those ASF mutants in pGEX-2T vector and inserting them into pCEP4 to generate pCEP4-ASF S3A. By using a similar strategy, we also mutated these three Ser residues, 227, 234, and 238, to aspartic acid to generate plasmid pCEP4-ASF S3D. Cell Culture and Transfection-COS-7, HEK-293T, SH-SY5Y, and HeLa cells were maintained in Dulbecco's modified Eagle's medium supplemented with 10% fetal bovine serum (Invitrogen) at 37°C. Normal human neuronal progenitor cells (Lonza, Walkersville, MD) were maintained in Neurobasal supplemented with 2% B27 (Invitrogen), 20 ng/ml fibroblast growth factor 2 (FGF-2), 20 ng/ml epidermal growth factor, and 10 ng/ml leukemia inhibitory factor and differentiated with 10 μM retinoic acid in the maintenance medium for 6 days. All transfections were performed in triplicate with Lipofectamine 2000 (Invitrogen) in 12-well plates. The cells were transfected with 2.4 μg of plasmid DNA and 5 μl of Lipofectamine 2000 in 1 ml Opti-MEM (Invitrogen) for 5 h at 37°C (5% CO2), after which 1 ml Dulbecco's modified Eagle's medium supplemented with 10% fetal bovine serum was added. For co-expression experiments, 2.4 μg of total plasmid was used containing 0.8 μg of E10 splicing vector, 0.8 μg of ASF vectors, and 0.8 μg of Dyrk1A vector or their control vectors. In Vitro Phosphorylation of ASF by Dyrk1A-For in vitro ASF phosphorylation by Dyrk1A, GST-ASF or GST (0.2 mg/ml) was incubated with various concentrations of Dyrk1A in a reaction buffer consisting of 50 mM Tris-HCl (pH 7.4), 10 mM β-mercaptoethanol, 0.1 mM EGTA, 10 mM MgCl2, and 0.2 mM [γ-32P]ATP (500 cpm/pmol). After incubation at 30°C for 30 min, the reaction was stopped by adding an equal volume of 2× Laemmli sample buffer and boiling. The reaction products were separated by SDS-PAGE. Incorporation of 32P was detected by exposure of the dried gel to a phosphorimaging system (BAS-1500, Fujifilm). GST Pull Down-GST or GST-ASF was purified by affinity purification with glutathione-Sepharose as described above but without elution from the beads. The beads coupled with GST or GST-ASF were incubated with crude extract from rat brain homogenate in buffer (50 mM Tris-HCl, pH 7.4, 8.5% sucrose, 50 mM NaF, 1 mM Na3VO4, 0.1% Triton X-100, 2 mM EDTA, 1 mM phenylmethylsulfonyl fluoride, 10 μg/ml aprotinin, 10 μg/ml leupeptin, and 10 μg/ml pepstatin). After a 4-h incubation at 4°C, the beads were washed with washing buffer (50 mM Tris-HCl, pH 7.4, 150 mM NaCl, and 1 mM dithiothreitol) six times, the bound proteins were eluted by boiling in Laemmli sample buffer, and the samples were subjected to Western blot analysis. Co-immunoprecipitation-HEK-293T cells were co-transfected with pCEP4-ASF-HA and pcDNA3-Dyrk1A for 48 h as described above. The cells were washed twice with phosphate-buffered saline (PBS) and lysed by sonication in lysate buffer (50 mM Tris-HCl, pH 7.4, 150 mM NaCl, 50 mM NaF, 1 mM Na3VO4, 2 mM EDTA, 1 mM phenylmethylsulfonyl fluoride, 2 μg/ml aprotinin, 2 μg/ml leupeptin, and 2 μg/ml pepstatin). Insoluble materials were removed by centrifugation; the supernatants were preabsorbed with protein G-conjugated agarose beads and incubated with anti-HA or anti-Dyrk1A antibody 8D9 overnight at 4°C, and then protein G beads were added. After a 4-h incubation at 4°C, the beads were washed with lysate buffer twice and with Tris-buffered saline twice, and bound proteins were eluted by boiling in Laemmli sample buffer.
The samples were subjected to Western blot analysis with the indicated primary antibodies. Co-localization Study-HeLa cells were plated in 12-well plates onto coverslips 1 day prior to transfection at 50 -60% confluence. HA-tagged ASF constructs were singly transfected or co-transfected with Dyrk1A or Dyrk1A k188R constructs as described above. Two days after transfection, the cells were washed with PBS and fixed with 4% paraformaldehyde in PBS for 30 min at room temperature. After washing with PBS, the cells were blocked with 10% goat serum in 0.2% Triton X-100/ PBS for 2 h at 37°C and incubated with rabbit anti-HA antibody (1:200) and mouse anti-Dyrk1A (8D9, 1:10,000) overnight at 4°C. After washing and incubation with secondary antibodies (TRITC-conjugated goat anti-rabbit IgG and FITC-conjugated goat anti-mouse IgG, 1:200), the cells were washed extensively with PBS and incubated with 5 g/ml Hoechst 33342 for 15 min at room temperature. The cells were washed with PBS, mounted with Fluoromount-G, and revealed with a Leica TCS-SP2 laser-scanning confocal microscope. Quantitation of Tau E10 Splicing by Reverse Transcription-PCR (RT-PCR)-Total cellular RNA was isolated from cultured cells by using an RNeasy mini kit (Qiagen GmbH). One microgram of total RNA was used for first-strand cDNA synthesis with oligo(dT) 15-18 by using an Omniscript reverse transcription kit (Qiagen GmbH). PCR was performed by using Prime-STAR TM HS DNA Polymerase (Takara Bio Inc., Otsu, Shiga, Japan) with primers (forward 5Ј-GGTGTCCACTCCCAGTT-CAA-3Ј and reverse 5Ј-CCCTGGTTTATGATGGATGTTG-CCTAATGAG-3Ј) to measure alternative splicing of tau E10 under conditions: at 98°C for 3 min, at 98°C for 10 s, and at 68°C for 40 s for 30 cycles and then at 68°C for 10 min for extension. The PCR products were resolved on 1.5% agarose gels and quantitated using the Molecular Imager system (Bio-Rad). Immunohistochemical Staining-The serial sections from the temporal inferior gyrus of six cases of DS and six control cases (see Table 1) were examined. For comparison, brain sections from the same region of three AD cases with a moderately severe stage of the disease (global deterioration scale, stage 6) and three cases with a severe stage of the disease (global deterioration scale, stage 7) were examined by immunohistochemical staining. One brain hemisphere was fixed in 10% buffered formalin, dehydrated in ethanol, and infiltrated and embedded in polyethylene glycol. The tissue blocks were cut serially into 50-m-thick sections. Monoclonal antibody RD3 (diluted 1:500) was used for detection of 3R-tau, and RD4 (diluted 1:200) was used to detect 4R-tau. The endogenous peroxidase in the sections was blocked with 0.2% hydrogen peroxide in methanol. The sections were then treated with 10% fetal bovine serum in PBS for 30 min to block nonspecific binding. The antibodies were diluted in 10% fetal bovine serum in PBS and were incubated with sections overnight at 4°C. The sections were washed and treated for 30 min with biotinylated sheep anti-mouse IgG antibody diluted 1:200. The sections were treated with an extravidin peroxidase conjugate (1:200) for 1 h, and the product of reaction was visualized with diaminobenzidine (0.5 mg/ml with 1.5% hydrogen peroxide in PBS). After immunostaining, the sections were lightly counterstained with hematoxylin. 
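To make the gel-based readout concrete, the following minimal Python sketch illustrates the kind of calculation behind the RT-PCR quantitation of tau E10 splicing described above: band intensities for the 4R (E10-included) and 3R (E10-excluded) products are converted into a 4R/3R ratio and a percent inclusion value. The band intensities and sample names are hypothetical placeholders, not data from this study.

```python
# Hypothetical sketch: quantifying tau exon 10 splicing from RT-PCR band
# densitometry, in the spirit of the gel quantitation described above.
# Intensity values are illustrative only.

def exon10_inclusion(intensity_4r: float, intensity_3r: float) -> dict:
    """Return the 4R/3R ratio and percent exon 10 inclusion for one lane."""
    total = intensity_4r + intensity_3r
    return {
        "ratio_4r_3r": intensity_4r / intensity_3r,
        "percent_e10_inclusion": 100.0 * intensity_4r / total,
    }

# Example lanes (arbitrary densitometry units)
control = exon10_inclusion(intensity_4r=480.0, intensity_3r=500.0)
asf_overexpression = exon10_inclusion(intensity_4r=820.0, intensity_3r=310.0)

print(control)             # roughly balanced 3R/4R
print(asf_overexpression)  # shifted toward exon 10 inclusion (4R-tau)
```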
Sarkosyl-insoluble Tau Preparation-Homogenates of control and DS brains prepared in 10 volumes of buffer (50 mM Tris-HCl, 8.5% sucrose, 50 mM NaF, 10 mM β-mercaptoethanol, 2 mM EDTA, 1 mM phenylmethylsulfonyl fluoride, 50 nM okadaic acid, and 10 μg/ml aprotinin, leupeptin, and pepstatin) were centrifuged at 16,000 × g for 10 min. The supernatant was adjusted to 1% N-lauroylsarcosine and 1% β-mercaptoethanol and incubated for 1 h at room temperature. After the incubation, the supernatant was spun at 100,000 × g for 1 h at 25°C. The resulting pellet was dissolved in Laemmli sample buffer and subjected to Western blot analysis. Mass Spectrometry-GST-ASF fusion protein was phosphorylated by Dyrk1A as described above. To maximize the yield of the phosphorylated protein, the reaction was carried out for 1 h with high amounts of the kinase (1:6 molar ratio of GST-ASF and GST-Dyrk1A). The phosphorylated products were separated by SDS-PAGE and stained with Coomassie Blue. The gel piece was subjected to in-gel tryptic digestion. Proteolytic peptides were extracted from the gel followed by TiO2-based immobilized metal affinity chelate enrichment for the phosphopeptide. The resulting fraction was concentrated and reconstituted in 10 μl of 5% formic acid for liquid chromatography-tandem mass spectrometry analysis separately. Statistical Analysis-Where appropriate, the data are presented as the means ± S.D. Data points were compared by the unpaired two-tailed Student's t test, and the calculated p values are indicated in Figs. 4-6. For the analysis of the correlation between Dyrk1A level and 3R-tau or 4R-tau levels in human brain homogenates, the Pearson product-moment correlation coefficient r was calculated. RESULTS ASF Promotes Tau E10 Inclusion-3R-tau and 4R-tau are generated by alternative splicing of tau E10, an event known to be regulated by a host of SR family splicing factors, including ASF, 9G8, SC35, and SRp55 (11, 12, 26-28). To compare the E10 splicing efficiency of these SR proteins, we co-transfected mini-tau gene pCI-SI9/LI10, consisting of tau exons 9-11, part of intron 9 (SI9), and the full length of intron 10 (LI10) (23), together with each of the four known E10 splicing factors, into HEK-293T cells and detected E10 splicing by RT-PCR. We found that of the four SR proteins, ASF promoted E10 inclusion most effectively (Fig. 1a). This effect was not cell type-specific, as overexpression of ASF in cells including HEK-293T, COS7, HeLa, and SH-SY5Y, also up-regulated 4R-tau production (Fig. 1b). To establish the requirement of ASF in E10 inclusion in vivo, we next knocked down endogenous ASF by transfecting HEK-293T cells with ASF-specific siRNA (Fig. 1c). Decreased ASF expression by siRNA significantly decreased 4R-tau production. FIGURE 1. Overexpression of ASF promotes tau exon 10 inclusion. a, overexpression of SR proteins affected tau E10 splicing. The pCI-SI9/LI10 mini-tau gene was co-transfected with the same amount of pCEP4-SR proteins into HEK-293T cells. Total RNA was subjected to RT-PCR for measurement of tau exon 10 splicing after a 48-h transfection. b, expression of ASF promoted tau exon 10 inclusion. pCI-SI9/LI10 was co-transfected with pCEP4-ASF into various cell lines indicated under each panel. Tau exon 10 splicing was measured by RT-PCR after a 48-h transfection. c, knock-down of ASF by siRNA inhibited tau exon 10 inclusion. pCI-SI9/LI10 was co-transfected with ASF siRNA or scramble siRNA into HEK-293T cells.
The expression of ASF was measured by Western blot (upper panel), and the tau exon 10 splicing was measured by RT-PCR after a 48-h transfection (lower panel). Con, control. The last lane is Dyrk1A alone, without GST-ASF. After drying the gel, the 32P incorporated into ASF was measured by using a phosphorimaging device (BAS-1500, Fuji) (upper panel). The level of 32P, normalized to the protein level detected by Coomassie Blue staining, was plotted against Dyrk1A concentration (lower panel). c, inhibition of Dyrk1A by EGCG decreased the phosphorylation of ASF in cultured cells. COS7 cells were transfected with pCEP4-ASF-HA for 45 h and then treated with 10 μM Tg003 and 50 μM EGCG to inhibit Clk and Dyrk1A, respectively. At the same time, [32P]orthophosphate was added to label the phosphoproteins. After a 3-h treatment and phospholabeling, the cells were harvested, and the cell lysates were subjected to immunoprecipitation with anti-HA. The immunoprecipitated ASF-HA was analyzed by autoradiography and Western blot with anti-HA. The 32P incorporated into ASF-HA was normalized to the ASF-HA level detected with anti-HA. Con, control; Pi, phosphate group. d, mutation of Ser to Ala at Ser-227, Ser-234, and/or Ser-238 inhibited ASF phosphorylation by Dyrk1A. Mutants of GST-ASF at the Ser residue indicated above each lane were phosphorylated by Dyrk1A (10 μg/ml) in vitro for 30 min. 32P incorporation into GST-ASFs was measured by using phosphorimaging analysis after the phosphorylation products were separated by SDS-PAGE. The 32P incorporated was normalized to the protein level detected with Coomassie Blue staining. WT, wild type. The decreased 4R-tau production upon ASF knockdown (Fig. 1c) further supported an essential role of ASF in tau E10 inclusion. Dyrk1A Phosphorylates ASF at Ser-227, Ser-234, and Ser-238-The biological activity of ASF is regulated by its phosphorylation (13-15). To study whether Dyrk1A phosphorylated ASF, we phosphorylated GST-ASF in vitro with Dyrk1A. We found that GST-ASF (Fig. 2b), but not GST (Fig. 2d), was phosphorylated by Dyrk1A in an enzyme concentration-dependent manner (Fig. 2b). To study the phosphorylation of ASF by Dyrk1A in vivo, we overexpressed HA-ASF in COS7 cells and labeled the cells with [32P]orthophosphate. ASF is phosphorylated heavily in vivo. To better assess the effect of inhibition of Dyrk1A on ASF phosphorylation, we lowered the phosphorylation of ASF by Tg003, a Clk/Sty inhibitor, and then added the Dyrk1A inhibitor EGCG. We found that the Clk/Sty inhibitor significantly decreased 32P incorporation into ASF and increased ASF mobility shift (Fig. 2c). When compared with Tg003 treatment, EGCG further inhibited the 32P incorporation. However, EGCG did not further change the mobility shift (Fig. 2c). These results indicate that ASF is phosphorylated by Dyrk1A at sites different from those phosphorylated by Clk/Sty in cultured cells. To map the putative phosphorylation sites on ASF, we phosphorylated GST-ASF in vitro using a high concentration of Dyrk1A (130 μg/ml, an enzyme/substrate molar ratio of ~1/6) for 60 min followed by SDS-PAGE separation of the phosphoproducts. A slower mobility of phospho-GST-ASF was evident by Coomassie Blue staining (data not shown), indicating that ASF was phosphorylated by Dyrk1A. Phospho-GST-ASF was then subjected to mass spectrometry after in-gel trypsin digestion. Surprisingly, no phosphorylated peptides were detected (data not shown).
Peptide recovery data from mass spectrometry showed the absence of C-terminal 40 amino acid residues, including part of RS1 and all RS2 (Fig. 2a and supplemental Fig. 1). This finding suggested that the phosphorylation sites were probably located within this region. Dyrk1A is a proline-arginine-directed Ser/Thr protein kinase and prefers an RX(X)(S/T)P motif. ASF contains three such motifs, all of them within 22 amino acid residues of the C-terminal RS2 domain (Fig. 2a). To determine whether Dyrk1A-mediated phosphorylation of ASF occurred within the consensus regions, we mutated three Ser residues within the motifs to Ala, either individually or in combination followed by in vitro phosphorylation by Dyrk1A. We found that any given single mutation decreased ASF phosphorylation, which was further compounded by double or triple mutations (Fig. 2d, lowest panel). This finding suggested that Dyrk1A phosphorylated ASF mainly at Ser-227, Ser-234, and Ser-238 in vitro. ASF Interacts with Dyrk1A-The in vitro phosphorylation of ASF by Dyrk1A led us to investigate whether the two proteins interact with each other in vitro and in vivo. Employing GSTpulldown assay and immunoprecipitation studies, we found that only GST-ASF, but not GST, could pull down Dyrk1A from rat brain extract (Fig. 3a), and Dyrk1A and ASF could be coimmunoprecipitated by antibodies to each protein (Fig. 3b). These results confirmed the interaction between ASF and Dyrk1A in vitro. To study their interaction in vivo, we co-expressed HA-tagged ASF (HA-ASF) and Dyrk1A in HeLa cells and established their subcellular localization by using confocal microscopy. Both ASF and Dyrk1A were co-localized in the nucleus and enriched in speckles (Fig. 3c), giving further evidence to their possible interaction in cultured cells. Dyrk1A Inhibits Tau E10 Inclusion Induced by ASF-To study whether Dyrk1A affects the biological activity of ASF, we determined ASF-mediated tau E10 splicing by overexpressing both Dyrk1A and ASF in COS 7 cells. Splicing products of 3Rand 4R-tau were quantitated by RT-PCR. We found that Dyrk1A significantly inhibited ASF-mediated tau E10 inclusion, and co-expression of ASF with kinase-dead Dyrk1A (Dyrk1A K188R ) had no effect on tau E10 splicing (Fig. 4a). Furthermore, knockdown of endogenous Dyrk1A with siRNA (Fig. 4b, inset) increased 4R-tau (Fig. 4b), confirming that ASF-mediated tau E10 splicing is regulated by Dyrk1A. Differentiated human neuronal progenitor cells with retinoid acid express both 3R-tau and 4R-tau. In these cells, inhibition of Dyrk1A with either harmine or EGCG elevated 4R-tau expression (Fig. 4c, upper panel) and resulted in the increase in the ratio of 4R-tau to 3R-tau (Fig. 4c, lower panel), indicating that Dyrk1A regulates endogenous tau exon 10 splicing. ASF shuttles between the cytoplasm and the nucleus, as well as within the nucleus, a process dependent on its state of phosphorylation. To determine whether Dyrk1A-induced phosphorylation of ASF also affected its subcellular localization, we overexpressed ASF alone or in combination with either Dyrk1A or kinase-dead Dyrk1A K188R in HeLa cells. Employing laser confocal microscopy, ASF was found to be localized primarily in the nuclear FIGURE 3. ASF interacts with Dyrk1A. a, Dyrk1A was pulled down from rat brain extract by GST-ASF. GST-ASF or GST coupled onto glutathione-Sepharose was incubated with rat brain extract. After washing, bound proteins were subjected to Western blots by using anti-GST, anti-SR protein, and anti-Dyrk1A. 
Only GST-ASF, but not GST, pulled down Dyrk1A. All these lanes are from the same blot from which unrelated lanes between the second and third lanes were removed. b, ASF and Dyrk1A could be co-immunoprecipitated by each other's antibodies. ASF tagged with HA and Dyrk1A were co-expressed in HEK-293T cells for 48 h. The cell extract was incubated with anti-HA or anti-Dyrk1A, and then protein G beads were added into the mixture. The bound proteins were subjected to Western blots by using antibodies indicated at the right of each blot. Dyrk1A and HA-ASF were co-immunoprecipitated by each other's antibodies, respectively. No Ab, no antibody; IP, immunoprecipitation. c, co-localization of ASF with Dyrk1A in nucleus. HA-ASF and Dyrk1A were co-transfected into HeLa cells. After a 48-h transfection, the cells were fixed and immunostained by anti-HA or anti-Dyrk1A and followed by TRITC-anti-rabbit IgG or FITC-anti-mouse IgG. Hoechst was used for nucleu staining. Dyrk1A Phosphorylates ASF periphery of ASF-only transfected cells (Fig. 4d). When co-expressed with Dyrk1A, ASF translocated into nuclear speckles and co-localized with Dyrk1A (Figs. 3c and 4d), whereas co-expression with Dyrk1A K188R had no effect on ASF localization (Fig. 4d). Together, these results suggest that Dyrk1A drives ASF from the nuclear periphery into speckles, preventing the association of ASF with nascent transcripts and resulting in an increase in 3R-tau production. ASF promoted tau exon 10 splicing in all tested cell lines (Fig. 1b). However, the promoting effect of ASF on tau exon 10 splicing was not to the same extent in different experiments by using the same cell line (Figs. 1, a and c and 4, a and b), suggesting that basal activities of endogenous splicing factors or their regulators are different in cells with different conditions, which was further confirmed by the study that HEK-293T cells with different passage showed different tau splicing level (supplemental Fig. 2). ASF Pseudophosphorylated at Ser-227, Ser-234, and Ser-238 Mainly Localizes in Speckles and Does Not Promote Tau E10 Inclusion-To evaluate the central role of Ser-227, Ser-234, and Ser-238 in conferring biological activity to ASF, we mutated these sites to either alanine (ASF S3A ) or aspartic acid (ASF S3D ) and studied the localization and biological activity of ASF in transiently transfected HeLa cells. We observed that ASF S3A was mainly localized in the nuclear periphery (Fig. 5a), and when co-overexpressed with Dyrk1A, it failed to translocate to the nuclear speckles (Fig. 5b). In contrast, ASF S3D , in which serine residues at the three sites were mutated to aspartate to mimic phosphorylation (pseudophosphorylation), was enriched in speckles (Fig. 5a), a situation reminiscent of the subnuclear localization of ASF upon phosphorylation by Dyrk1A (Figs. 3c and 4d). We further studied whether phosphorylation-resistant ASF S3A promoted tau E10 inclusion in transfected cells. As expected, ASFS3A promoted tau E10 inclusion, whereas pseudophosphorylated ASF S3D did not change the tau E10 splicing pattern in tau transcripts (Fig. 5c). Together, these results strongly suggest Mini-tau gene pCI-SI9/LI10 was co-transfected with ASF or Dyrk1A into COS7 cells for 48 h, and the total RNA was extracted and subjected for measurement of tau exon 10 splicing by using RT-PCR. Con, control. b, knock down of Dyrk1A by its siRNA elevated exon 10 inclusion induced by ASF. 
Mini-tau gene was co-transfected into HEK-293T cells with ASF and siRNA of Dyrk1A or its scrambled form, and then tau exon 10 splicing was analyzed by RT-PCR after a 48-h transfection. Expression of endogenous Dyrk1A was decreased by siRNA of Dyrk1A dose-dependently (inset panel). Transfection of Dyrk1A siRNA significantly increased the 4R-tau expression when compared with the scrambled form. c, inhibition of Dyrk1A increased 4R-tau expression in differentiated human neuronal progenitor cells. Human neuronal progenitor cells were differentiated with retinoic acid for 6 days and then treated with 12.5 μM EGCG or 10 μM harmine for 24 h to inhibit Dyrk1A. The cell lysates were subjected to Western blots with anti-3R-tau and anti-4R-tau antibodies. The ratio of 4R-tau and 3R-tau was calculated. The data are presented as mean ± S.D. d, Dyrk1A drove ASF into speckles. HA-ASF and/or Dyrk1A or Dyrk1A K188R were co-transfected into HeLa cells. After a 48-h transfection, the cells were fixed and immunostained by anti-HA and anti-Dyrk1A and followed by TRITC-anti-rabbit IgG or FITC-anti-mouse IgG, respectively. Hoechst was used for nucleus staining. **, p < 0.01 versus control group; #, p < 0.05 versus ASF group. Together, these results strongly suggest that phosphorylation of ASF at Ser-227, Ser-234, and Ser-238 changes its localization from the nuclear periphery, a location of active splicing, to speckles, and in so doing, inhibits ASF-mediated tau E10 inclusion, thereby altering the production of 3R-tau and 4R-tau. 3R-tau Is Increased and Is Correlated to Dyrk1A Overexpression in DS-In DS brain, as expected due to three copies of the Dyrk1A gene, the protein level of Dyrk1A is increased by 50% (Fig. 6a). To study whether the increased dosage of Dyrk1A in DS brain affects the proportion of 3R- and 4R-tau, we subjected brain homogenates from temporal cortices of six DS and six age- and postmortem interval-matched control cases (Table 1) to Western and immuno-dot blots (data not shown) using antibodies against 3R-tau (RD3), 4R-tau (RD4), or total tau (R134d, or a mixture of 43D and tau 5). We found that the level of tau protein was elevated in DS brains to ~3-fold of control cases (Fig. 6b). This increase was due to a selective increase in 3R-tau (Fig. 6c). The level of 3R-tau (normalized to total-tau) was increased 4-fold, whereas that of 4R-tau (normalized to total-tau) was decreased by ~25% in DS when compared with control brains (Fig. 6, c and d). Dephosphorylation of tau by alkaline phosphatase did not affect the labeling by these antibodies (data not shown), confirming that the antibodies were phosphorylation-independent. These results suggest that the proportion of 3R-tau is markedly increased and that the balance of 3R-tau and 4R-tau is disturbed in DS when compared with control brain. To determine whether increased 3R-tau was associated with neurofibrillary pathology, we compared 3R- and 4R-tau-specific immunostaining in temporal cortices from DS and control cases employing RD3 and RD4 antibodies (Fig. 6e). In control brain, RD3- or RD4-positive NFTs in the temporal cortex were rare (data not shown). However, NFTs in DS cases showed a predominant anti-3R-tau immunoreactivity (Fig. 6e, left panel). In comparable AD cases, the 3R-tau-positive NFTs were 2-3 times fewer than in a similar area from subjects with DS (Fig. 6e, right panel). To learn whether 3R-tau is predominant in sarkosyl-insoluble tau, we prepared a sarkosyl-insoluble fraction from control and DS brain extracts.
In normal human brain, sarcosyl-insoluble tau was not detectable (data not shown). In DS brain, 3R-tau in the sarcosyl-insoluble fraction showed 3.1-fold enrichment when compared with 4R-tau (Fig. 6f), indicating that 3R-tau is predominant in sarcosyl-insoluble tau. These results suggest that neurofibrillary degeneration in the DS brain is primarily associated with 3R-tau. To study whether overexpression of Dyrk1A correlated with an imbalance of 3R-and 4R-tau, we plotted Dyrk1A levels with 3R-tau or 4R-tau levels in brain homogenate of individual case. We found a strong correlation between 3R-tau and Dyrk1A levels and an inverse correlation between 4R-tau and Dyrk1A levels (Fig. 6g), indicating that the overexpression of Dyrk1A in DS brain may contribute to an increase in 3R-tau/4R-tau ratio. DISCUSSION The amyloid pathology in DS brain is believed to be due to an extra copy of the APP gene. However, the presence of NFTs in trisomy 21 and the etiology of the neurofibrillary degeneration seen in DS remain elusive. A close look at the neurofibrillary degeneration of DS reveals a pattern of tau pathology reminiscent of several tauopathies constituting frontotemporal dementia. These tauopathies are caused by dysregulation of alternative splicing of tau E10, causing a shift in the ratio of 3R/4R tau. In the present study, we propose a novel pathogenic mechanism of tau pathology that clearly explains how the imbalance in 3R-tau and 4R-tau develops, which might cause neurofibrillary degeneration, along with brain amyloidosis, leading to early onset Alzheimer-type dementia in individuals with DS. We propose Dyrk1A, a kinase located in the Down syndrome critical region, as a key regulator of tau alternative splicing. By being overexpressed in the DS brain, Dyrk1A alters the nuclear distribution of splicing factor ASF by phosphorylating at Ser-227, Ser-234, and Ser-238, making it unavailable to the tau transcript, and causing a selective increase in 3R-tau level. The abnormal 3R/4R tau ratio resulting from defective alternative splicing of tau E10 is primarily responsible for the Alzheimer-type neurofibrillary degeneration seen in DS (Fig. 7). Tau E10 splicing is regulated by several SR or SR-like proteins, including ASF, SC35, 9G8, Tra2␤, and SRp55 (11, 12, 26 -28). We found ASF to be the most effective tau E10 splicing factor when compared with other SR proteins. ASF is known to bind to a polypurine enhancer on tau E10 and has been reported to play essential and regulatory roles in tau E10 inclusion (12). Its activity and subcellular localization are tightly regulated by its degree of phosphorylation. To date, four kinases, including SR protein kinase (SRPK) 1, SRPK2, Clk/Sty, and DNA topoisomerase-I, have been reported to phosphorylate ASF (14, 29 -31). Phosphorylation by SRPK1 drives ASF from cytosol into the nucleus, and phosphorylation by Clk/Sty causes release of ASF from speckles, the storage compartments of inactive SR proteins (16,32). Thus, both SRPK1 and Clk/Sty phosphorylate FIGURE 5. Mutations of ASF affect its subcellular location and role in tau exon 10 inclusion. a, subcellular location of ASF mutants. HA-tagged ASF S3A or ASF S3D was overexpressed in HeLa cells, and then the cells were fixed and immunostained with anti-HA and FITC-labeled secondary antibody. Hoechst was used for nucleus staining. b, Dyrk1A could not drive ASF S3A into speckles. HA-tagged ASF S3A (red) was co-expressed with Dyrk1A (green) in HeLa cells and immunostained as described above. 
c, effects of ASF mutations on tau exon 10 splicing. The mini-tau gene was co-transfected with ASF S3A or ASF S3D into COS7 cells, and tau exon 10 splicing was analyzed by RT-PCR after a 48-h transfection. *, p Ͻ 0.05 versus control (Con) group. ASF and recruit it into nascent transcripts, resulting in enhancement of its role in regulation of alternative splicing. Our data identify Dyrk1A as a novel ASF kinase. We have found that Dyrk1A phosphorylates ASF mainly at three sites (Ser-227, Ser-234, and Ser-238) within the consensus motif (RX(X)(T/S)P) of Dyrk1A, none of which are known to be phosphorylated by the other four ASF kinases. In addition to ASF, Dyrk1A also phosphorylates other splicing factors and regulates their activity (20,21). However, whether phosphorylation of these factors plays any role in the alternative splicing of tau is not known yet. In the present study, we further show that upon phosphorylation by Dyrk1A, ASF is recruited from the nuclear periphery to the speckles, an event that corresponds to a dramatic decrease in tau E10 inclusion. However, the level of ASF remains unchanged in the DS brain (see supplemental Fig. 3). Therefore, we believe that it is the phosphorylation of ASF at Ser-227, Ser-234, and Ser-238 by Dyrk1A that causes dysregulation of tau E10 alternative splicing, leading to an imbalance in 3R-tau and 4R-tau, ultimately laying foundations for the development of neurofibrillary degeneration in DS. Although hyperphosphorylation of tau plays a fundamental role in the development of Alzheimer-type neurofibrillary degeneration, imbalance in the cellular levels of 3R-and 4R-tau is emerging as an important concept in this pathology. Several lines of evidence, from transgenic mouse models to human tauopathies, emphasize the importance of a critical 3R/4R tau ratio in the cell, which, if disturbed for a given species, may lead to the characteristic neurofibrillary pathology. For example, overexpression of human 3R-tau, but not 4R-tau (33) in mice (where 4R-tau is the only isoform expressed in adult life), produces age-dependent tauopathy (34), whereas in adult Ts65Dn mouse, a transgenic mouse model of DS, there is no evidence of neurofibrillary pathology (35), although tau phosphorylation is increased at several sites (36). Because E10 becomes a constitutively expressed exon in the adult mouse, overexpression of six human tau isoforms in the transgenic mouse 8c model does not produce neurofibrillary degeneration; 3R/4R balance in the Tg 8c mouse is maintained by endogenous 4R-tau (37). However, FIGURE 6. Increased 3R-tau in DS brain correlates with overexpresson of Dyrk1A. a, Dyrk1A level was increased by ϳ50% in DS brain. Immuno-dot blots of temporal cortical homogenates from six DS and six control (Con) cases were developed with monoclonal antibody 8D9 to Dyrk1A and quantitated by densitometry. b, total tau was increased in DS brain. Total tau level in temporal cortical homogenates from six DS and six control cases was detected by Western blots with polyclonal antibody R134d and quantitated by densitometry. c and d, the levels of 3R-tau and 4R-tau were altered in DS brain. 3R-tau and 4R-tau in the same samples were measured by Western blots with anti-3R-tau (RD3) and anti-4R-tau (RD4), respectively, quantitated by densitometry, and normalized by total tau level. For the Western blots shown in panels b-d, the amount of proteins applied in homogenates of DS cases was 30% of that in the control cases. **, p Ͻ 0.01, and *, p Ͻ 0.05. 
e, NFTs in DS brain were mainly 3R-tau-positive. Serial sections from the temporal inferior gyrus of a 58-year-old DS subject (case number 1139) (DS) and an AD subject with comparably severe AD (global deterioration scale, stage 7) (AD) were immunostained with anti-3R-tau or anti-4R-tau. f, sarkosyl-insoluble tau in DS brain was predominantly 3R-tau. DS brain extracts were adjusted to 1% N-lauroylsarcosine and 1% β-mercaptoethanol and incubated for 1 h at room temperature. Sarkosyl-insoluble tau was collected from the pellet of 100,000 × g centrifugation at 25°C and dissolved in Laemmli sample buffer. The brain extracts and the sarkosyl-insoluble protein were analyzed proportionally for the levels of 3R-tau and 4R-tau by Western blots. The levels of 3R-tau and 4R-tau in the sarkosyl-insoluble fraction were normalized to tau in the brain extracts, and the level of 4R-tau was designated as 100%. The data are presented as mean ± S.D. **, p < 0.01. g, correlation of 3R-tau or 4R-tau level (x axis) with Dyrk1A level (y axis). The levels of 3R-tau (left) or 4R-tau (right) measured in c or d were plotted against the level of Dyrk1A measured in e. Open circles are control cases, and closed circles are DS cases. When the same 8c mouse is crossed with a tau knock-out mouse, the resultant htau mouse has more 3R-tau than 4R-tau and thus displays abundant neurofibrillary pathology (38). We see a similar pattern of a disturbed 3R/4R balance in several human tauopathies. Close to 50% of all mutations in the tau gene causing human FTDP-17 affect tau exon E10 splicing and alter the 3R/4R tau ratio (10). Most of these mutations (N279K, L284L, ΔN296, N296N, N296H, P301S, S305N, S303S, E10+3, E10+11, E10+12, E10+13, E10+14, and E10+16) increase tau exon E10 inclusion and raise the normal 4R/3R ratio from 1 to 2-3 (39). Other tau gene mutations such as ΔK280, E10+19, and E10+29 decrease E10 inclusion and reduce the 4R/3R ratio (40-42). In addition to FTDP-17, dysregulation of E10 splicing may also contribute to other human tauopathies, such as Pick disease (with a predominant increase in 3R-tau), progressive supranuclear palsy (4R-tau up-regulation), and corticobasal degeneration (4R-tau up-regulation) (43). 4R- and 3R-taus have inherent differences in regulating microtubule dynamics, and the pathology triggered by a 3R/4R tau imbalance may be differentially affected by the particular tau isoform. For example, 3R-tau is known to bind to microtubules with lower efficiency than 4R-tau (44-46). We observed an ~4-fold increase in 3R-tau in the DS brain and believe that the resultant free 3R-tau becomes a readily available substrate for hyperphosphorylation by tau kinases. In the case of human tauopathies with a low 3R/4R ratio, 3R-tau probably becomes free due to the high affinity binding of 4R-tau to microtubules. Free tau has been shown to be a more favorable substrate for abnormal hyperphosphorylation than its microtubule-bound counterpart (47). In DS brain, the situation is further compounded by the fact that Dyrk1A not only causes a 3R/4R imbalance, but being a tau kinase, also leads to hyperphosphorylation of tau at many sites including Thr-212. Phosphorylation of tau by Dyrk1A can prime tau for further abnormal hyperphosphorylation by glycogen synthase kinase-3β (36). The high levels of Aβ in DS brain due to an extra dose of the APP gene may also significantly accelerate tau pathology by activating tau kinases including glycogen synthase kinase-3β (GSK-3β) (48).
It has been shown that infusion of A␤ in tauP301L transgenic mice causes neurofibrillary degeneration, and the double transgenic mice generated by crossing Tg2567 (swAPP) animals with tauP301L animals have significantly increased neurofibrillary pathology (49,50). Thus, a multifaceted effect of an extra copy of Dyrk1A and APP gene in DS may help explain the occurrence of neurofibrillary degeneration ϳ20 years earlier than in AD. Apart from AD-like pathology, 50% of DS patients have one or more forms of congenital cardiac anomalies. Interestingly, a recent report suggested the fundamental role of ASF in programming excitation-contraction coupling in the cardiac muscle (51). Moreover, ASF was recently identified as an onco-protein in a study examining a number of solid organ tumors, which may provide insight into the 20-fold increase in the incidence of leukemia in DS patients when compared with the general population (52). Thus, Dyrk1A-induced changes in ASF activity may elucidate the molecular biology underlying some of the tell tale features of DS. In summary, the present study provides novel insights into the etiology of neurofibrillary pathology and presents intriguing data about the molecular basis of the 3R/4R imbalance seen in the DS brain. We show that Dyrk1A level is increased in DS brain due to an extra copy of the Dyrk1A gene located in the Down syndrome critical region. We further show that Dyrk1A phosphorylates the splicing factor ASF at three sites, causing a shift in its nuclear localization and making it unavailable for tau E10 splicing. This results in severe up-regulation of 3R-tau mRNA and is directly responsible for 3R/4R tau imbalance in the DS brain. Dyrk1A may further accelerate neurofibrillary degeneration by contributing to tau hyperphosphorylation. Regulation of ASF-mediated tau E10 alternative splicing by Dyrk1A opens new avenues for research and development of therapeutics for tauopathies. inclusion. Overexpression of Dyrk1A phosphorylates ASF at Ser-227, Ser-234, and Ser-238, which drives it into speckles from nascent transcripts and leads to E10 exclusion and increase of 3R-tau production. Increased 3R-tau disrupts the balance of 3R-/4R-tau required for normal function of adult human brain and aggregates in affected neurons, which initiates and/or accelerates the formation of NFTs and develops tauopathy in DS brain. ESS, exonic splicing silencer.
The Effects of Hyperbaric Oxygenation on Oxidative Stress, Inflammation and Angiogenesis Hyperbaric oxygen therapy (HBOT) is commonly used as treatment in several diseases, such as non-healing chronic wounds, late radiation injuries and carbon monoxide poisoning. Ongoing research into HBOT has shown that preconditioning for surgery is a potential new treatment application, which may reduce complication rates and hospital stay. In this review, the effect of HBOT on oxidative stress, inflammation and angiogenesis is investigated to better understand the potential mechanisms underlying preconditioning for surgery using HBOT. A systematic search was conducted to retrieve studies measuring markers of oxidative stress, inflammation, or angiogenesis in humans. Analysis of the included studies showed that HBOT-induced oxidative stress reduces the concentrations of pro-inflammatory acute phase proteins, interleukins and cytokines and increases growth factors and other pro-angiogenesis cytokines. Several articles only noted this surge after the first HBOT session or for a short duration after each session. The anti-inflammatory status following HBOT may be mediated by hyperoxia interfering with NF-κB and IκBα. Further research into the effect of HBOT on inflammation and angiogenesis is needed to determine the implications of these findings for clinical practice. Introduction Since the adjunctive use of hyperbaric oxygen therapy (HBOT) was first described in 1879 [1], it has been further explored and is nowadays a widely accepted treatment in several diseases, such as delayed radiation injury, diabetic foot ulcers, carbon monoxide poisoning, decompression sickness and arterial gas embolism [2]. The Undersea and Hyperbaric Medical Society (UHMS) describes HBOT as an intervention whereby patients breathe near 100% oxygen while being pressurized to at least 1.4 atmosphere absolute (ATA) in a hyperbaric chamber [1]. Currently, the UHMS has accepted 14 indications for HBOT [3], yet new applications of HBOT have been described, including preconditioning for surgery [4][5][6][7]. Several cohort studies and randomized controlled trials, executed in different surgical procedures (e.g., abdominoplasty and pancreaticoduodenectomy), reported lower postoperative complication rates and a reduced length of stay on the intensive care unit after preoperative HBOT [4][5][6][7]. As the occurrence of postoperative complications is associated with worse short-term and long-term outcomes [8], a decrease in psychosocial well-being [9] and higher healthcare costs [10], HBOT may prevent those adverse effects of surgery. To realize this perioperative protective effect, HBOT must be able to prevent infection and increase wound healing. It is likely that oxidative stress, which has been confirmed to be the main effect of HBOT [11], plays an activating role in the mechanisms underlying the therapeutic pathway of preconditioning for surgery with HBOT. An increase in reactive oxygen species (ROS) levels is associated with enhanced pathogen clearance [12]. Furthermore, ROS induce the synthesis of several growth factors, such as vascular endothelial growth factor (VEGF), placental growth factor (PGF) and angiopoietin (Ang) 1 and 2 and recruit stem cells from the bone marrow, which are responsible for neovascularization [13]. 
However, a frequently mentioned argument against the use of HBOT revolves around the induction of oxidative stress as well, since higher levels of ROS and reactive nitrogen species (RNS) may lead to oxidative and nitrosative damage, mitochondrial aging, genotoxicity and maintenance of (chronic) inflammation [14][15][16]. The aim of this review is to gain more insight into the mechanisms of HBOT by assessing its effect on oxidative stress, inflammation and angiogenesis markers in humans. More insight into these effects of HBOT will predict and underpin the outcome of innovative uses of HBOT and balance its benefits against potential damage. No systematic overview of research into these parameters in human beings has yet been published. Methods A search of the literature was performed in MEDLINE and EMBASE on 2 November 2020. Key terms used in the search were 'hyperbaric oxygen' and 'oxidative stress', 'inflammation', or 'wound healing'. The results were not restricted as no filters were applied. The detailed literature search can be found in Appendix A (see Tables A1 and A2). All studies found were screened on title and abstract by one reviewer (S.D.D.W.), who excluded those studies that met any of the following criteria: (1) absence of abstract, (2) congress abstract, errata or guideline, (3) case report (defined as five or less patients), (4) narrative review, (5) animal research, (6) no treatment with HBOT, or (7) one of the following outcome measures: cure, complication rate, or a disease-specific outcome parameter. The same reviewer assessed the full-text of the remaining studies. The following inclusion criteria were applied: (1) measurement of at least one marker of oxidative stress, inflammation, or angiogenesis before and after HBOT, (2) study in humans (or human material) and (3) English full-text available. EndNote X9 was used to keep track of the screening process. The included studies were divided into an "in vivo" and "in vitro" group. In vivo studies were performed in a clinical setting in which all subjects were at least pressurized once, whereas in vitro studies obtained human material what was subsequently exposed to HBOT. Information on first author, publication year, investigated parameters and patient (in vivo)/sample (in vitro) characteristics and results (solely of the parameters of interest) were extracted. Outcomes of statistical tests with a p-value < 0.05 were considered significant. All information was extracted by hand and documented in Microsoft Excel (v16.0). Oxidative Stress In total, 74 articles reporting on the effect of HBOT on oxidative stress were found. Subjects mainly received one session of HBOT in a hyperbaric chamber pressurized to 2-2.5 ATA (203-253 kPa), yet in seven studies a wet exposure to hyperbaric oxygen (i.e., a dive) to up to 6 ATA (608 kPa) was employed. Nearly 40% (n = 21) of the clinical studies were conducted in healthy volunteers (see Table A3). Catalase, glutathione peroxidase (GPx), malondialdehyde (MDA), nitric oxide synthase (NOS), ROS, RNS, superoxide dismutase (SOD) and thiobarbituric acid reactive substances (TBARS) were the most frequent markers of interest (see Table 1). A clear stimulating effect of HBOT on ROS (see Table 1) was found. Nonetheless, two out of the three studies assessing hydrogen peroxide described lower concentrations after HBOT [33,42] (see Table A3). 
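As a rough illustration of the title/abstract screening step described in the Methods above, the sketch below encodes the exclusion criteria as a simple filter over candidate records. The record fields, their encodings, and the example entries are assumptions made purely for illustration; they are not the actual screening data or tooling used in the review.

```python
# Hypothetical sketch of the title/abstract screening described in the Methods:
# each record is dropped if it matches any exclusion criterion.

from dataclasses import dataclass

@dataclass
class Record:
    has_abstract: bool
    publication_type: str   # e.g. "article", "congress abstract", "guideline"
    n_patients: int
    is_animal_study: bool
    received_hbot: bool
    reports_marker: bool     # oxidative stress, inflammation or angiogenesis marker

def passes_screening(r: Record) -> bool:
    """Apply the exclusion criteria from the Methods to one record."""
    if not r.has_abstract:
        return False
    if r.publication_type in {"congress abstract", "erratum", "guideline",
                              "narrative review"}:
        return False
    if r.n_patients <= 5:  # case reports defined as five or fewer patients
        return False
    if r.is_animal_study or not r.received_hbot or not r.reports_marker:
        return False
    return True

records = [
    Record(True, "article", 24, False, True, True),
    Record(True, "congress abstract", 40, False, True, True),
    Record(True, "article", 3, False, True, True),
]
included = [r for r in records if passes_screening(r)]
print(f"{len(included)} of {len(records)} records pass title/abstract screening")
```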
NOS and RNS concentrations seem to increase after HBOT as well, although this effect was less pronounced, which can be explained by a repeatedly reported decrease in exhaled nitric oxygen [61,69,70]. Timing of sampling may also play a role, as several articles only noted an increase in inducible NOS or nitrite three hours after the end of an HBOT session [34,49,55]. Not only the presence of NOS, RNS and ROS has been investigated, but also their effects on lipids, proteins, carbohydrates and DNA/RNA (see Table 1). Little research has been done regarding protein and carbohydrate modifications following HBOT, but no effect or a stimulating effect on lipid peroxidation, resulting in MDA and other aldehydes (TBARS), has been reported in various studies. DNA-damaging effects of HBOT were not demonstrated employing the most commonly used DNA-lesion-marker 8-hydroxydeoxyguanosine [146]. Concerning the concentrations of anti-oxidative enzymes that protect against the potentially harmful effects of oxidative stress, such as catalase, SOD and GPx, conflicting results were found (see Table 1). In general, no effect or an indication for an increasing effect of HBOT on the enzyme activity of those antioxidants has been demonstrated. HBOT may have a uniform effect on SOD and catalase, as most of the studies reported increased, decreased, or stable SOD and catalase levels and, thus, no differences in effect of HBOT between these two enzymes [76,80,81,83,85,94,97,100]. However, a difference between SOD and/or catalase concentrations in respectively plasma and erythrocytes has been reported [55,62,63]. Benedetti et al. [80] and Dennog et al. [94] describe no effect of HBOT on the free radical trapping anti-oxidants with an exogenous origin, such as vitamin A, vitamin C and vitamin E [149]. Inflammation Of the 140 studies included, 58 articles describing inflammatory markers were identified. Most of the research included at least three HBOT-sessions, yet study protocols consisting of 20-40 sessions were common, in particular in articles reporting acute-phase proteins (see Table A4). Popular variables of interest were interleukins (IL) (n = 31), acutephase proteins (n = 26) and tumor necrosis factor-α (TNF-α) (n = 25) (see Table 2). Concerning acute phase proteins, a decreasing effect of HBOT on (high-sensitivity) C-reactive protein ((hs-)CRP) was found as 75% (n = 12) of the studies investigating (hs-) CRP reported lower concentrations post-HBOT. Strikingly, HBOT may have a stimulating impact on granulocyte-colony stimulating factor and an inhibiting effect on insulin-like growth factor-1, both reflecting a pro-inflammatory state [150] (see Table 2). In line with the outcomes regarding (hs-)CRP and interleukins, an anti-inflammatory effect of HBOT was also shown by decreasing levels of the pro-inflammatory cytokines interferon-γ (IFN-γ), nuclear factor kappa B (NF-κB) and TNF-α (see Table 2). However, HBOT may have an initial pro-inflammatory effect, as some studies described an increase in TNF-α during or shortly after HBOT [87,127,134]. Angiogenesis Concerning the angiogenesis research, 34 studies were found in addition to the earlier mentioned studies reporting on interleukins, interferons, insulin-like growth factor 1 (IGF-1), NF-κB and TNF-α. Most of the articles described angiogenesis-inducing cytokines or growth factors and were performed in clinical setting (n = 20). However, five out of seven studies on downstream effectors of angiogenesis were conducted in vitro (see Table A5). 
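The per-marker summaries in the Results above (for example, the statement that 75% of the (hs-)CRP studies reported lower concentrations after HBOT) reduce to counting the reported direction of change per marker across studies. A hedged Python sketch of that tallying step follows; the study entries are invented placeholders, not the extracted data.

```python
# Hypothetical sketch of per-marker tallying of reported directions of change
# across studies, as used for summary statements in the Results.

from collections import Counter

# (marker, reported direction after HBOT), one tuple per study -- placeholders
extracted = [
    ("CRP", "decrease"), ("CRP", "decrease"), ("CRP", "no effect"),
    ("CRP", "decrease"), ("TNF-alpha", "increase"), ("TNF-alpha", "decrease"),
    ("VEGF", "increase"), ("VEGF", "increase"), ("VEGF", "no effect"),
]

def summarize(marker: str) -> None:
    """Print how many studies reported each direction for one marker."""
    counts = Counter(direction for m, direction in extracted if m == marker)
    total = sum(counts.values())
    for direction, n in counts.most_common():
        print(f"{marker}: {direction} in {n}/{total} studies ({100 * n / total:.0f}%)")

for marker in ("CRP", "TNF-alpha", "VEGF"):
    summarize(marker)
```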
Epidermal growth factor (EGF), extracellular signal-regulated kinase (ERK), (basic) fibroblast growth factor, transforming growth factor-β (TGF-β), VEGF, IFN-γ, IL-6, IL-8, NF-κB and TNF-α (see Tables 2 and 3) were the only angiogenesis markers reported in at least five articles. HBOT most likely has a stimulating effect on various growth factors involved in angiogenesis (i.e., EGF, hematopoietic growth factor, keratinocyte growth factor, PGF and VEGF). This effect may only be present shortly after the intervention, since several studies with repeated HBOT sessions described no differences in pre-HBOT values or only a rise after the first session (and not after following sessions) [62,141,147] (see Table A5). Whereas for some angiogenesis-stimulating cytokines, such as stromal cell-derived factor-1α, a similar increasing effect of HBOT was found, no or an inhibiting effect on TGF was seen. HBOT seems not to affect the cytokine receptors (see Table 3). As HBOT causes an increase in angiogenesis-promoting growth factors and cytokines, one would also expect a stimulating effect on the downstream effectors of blood vessel formation. However, inconsistent outcomes were reported (see Table 3). The phosphatidylinositol-3 kinase (PI3K)/AKT pathway was upregulated and the ERK and p38 mitogen-activated protein kinase (p38 MAPK) pathways were downregulated. Therefore, HBOT effects on downstream effectors of blood vessel formation seem to differ depending on the intracellular effector route. Discussion This review is the first to systematically summarize the effect of HBOT on oxidative stress, inflammation and angiogenesis markers in human beings. HBOT increases the levels of oxygen radicals, which induce oxidative stress. An anti-inflammatory action of HBOT was demonstrated by decreasing concentrations of several pro-inflammatory markers. Furthermore, HBOT seems to stimulate the release of angiogenesis-promoting cytokines, including growth factors. In the light of previous research, reporting a link between oxidative stress and a pro-inflammatory state [152-154], it is remarkable that HBOT leads to a more anti-inflammatory state. However, these findings do correspond with studies into the effects of HBOT using thermal imaging, in which a decrease in wound temperature was found [155,156]. This temperature reduction could indicate a local decline in inflammation. This anti-inflammatory effect is likely mediated by the inhibition of NF-κB, a transcription factor for pro-inflammatory genes [157-159]. A direct anti-inflammatory action of HBOT seems less probable, since no differences in the concentrations of anti-inflammatory markers (except IL-1Ra) were noted. Although beyond the scope of this review, Yu et al. [160] have shown in an animal model that HBOT decreases the NF-κB concentrations by a higher release of IκBα, which is an inhibitor of NF-κB and degrades under hypoxic circumstances [161]. An increase in IκBα along with a decrease in NF-κB after HBOT was also seen in the only study in the current review reporting on IκBα [52]. Therefore, hyperoxia generated during HBOT may stimulate the preservation of IκBα and thereby inhibit NF-κB release, resulting in less gene transcription of pro-inflammatory cytokines and, thus, an anti-inflammatory state despite oxidative stress. NF-κB is not only a crucial transcription factor in inflammation, but also plays a role, together with HIF-1α, in the induction of angiogenesis.
Growth factors and other angiogenesis-promoting cytokines induce new vessel formation by increased expression of pro-angiogenesis genes, which is mediated by NF-κB or (under hypoxia) HIF-1α [162,163]. Since the current review demonstrates an inhibiting effect of HBOT on both transcription factors, and since little research, with contradicting outcomes, has been done into the downstream effectors of angiogenesis (i.e., PI3K, Akt, p38 MAPK, ERK), it is unclear how increased levels of pro-angiogenesis growth factors and cytokines actually induce increased tube formation, as shown by Anguiano-Hernandez et al. [125], Lin et al. [130] and Shyu et al. [40]. Thus, further research into the relation between NF-κB, HBOT and the angiogenesis pathways is needed. Another striking finding concerning angiogenesis is that several articles reported an increase in growth factors only or particularly after the first HBOT session [40,141,147], while it is common to conduct 20-40 sessions for chronic non-healing wounds or radiation-induced tissue injury (indications strongly relying on the angiogenesis effects of HBOT) [2]. Furthermore, Sureda et al. [62] describe, in the only in vivo study assessing the effect of HBOT on growth factors at several time points during follow-up, an increase in VEGF immediately after each session, yet VEGF levels determined pre-session #5 and #20 were similar to the baseline (pre-session #1) value. These findings possibly suggest a short-lived pro-angiogenesis effect of HBOT. However, due to a shortage of studies reporting on angiogenesis markers on a daily or weekly basis during a treatment protocol including 20-40 sessions, it remains unclear which markers are involved in this short-term effect of HBOT and whether other factors play a role in this angiogenesis process. The aim of this review was to gather a comprehensive overview of the effects of HBOT on oxidative stress, inflammation and angiogenesis. We must conclude that existing research does not allow for a complete understanding of the physiology underlying new promising treatment modalities for HBOT, such as preconditioning for surgery. Due to the heterogeneity of included patient populations and the inclusion of studies in healthy volunteers, it is difficult to extrapolate findings to the surgical patient in general. Furthermore, this review did not focus on clinical outcomes related to inflammation, angiogenesis and oxidative stress, making it impossible to determine the implications of the described findings in practice. In conclusion, hyperoxia and oxidative stress induced by HBOT affect inflammation and angiogenesis markers, but whether hyperoxia and oxidative stress induce a clinically relevant decrease in inflammation and increase in angiogenesis remains unclear and needs to be further investigated before innovative interventions can be widely applied.
Sports selection of volleyball players: genetic criteria to define motor endowments (information 2)

Provisions of sports genetics were realized practically in the system of the individual prediction of the development of various signs and abilities of a person and used successfully at various stages of sports training and selection. Practical criteria of the individual prediction are data on family sports endowments, features of genetic conditionality of signs (morphological, motive, psychophysiological) in the development, and identification of the genetic markers which define predisposition to a certain activity of a person or development of signs.

Sports genetics is a rather young science. Its development is intensively carried out in Ukraine [5; 6] and abroad – in Canada, the USA [11; 13] and Russia [2; 10]. A course on sports genetics for students of the specialty physical education and sport is developed and taught in Ukraine. Provisions of sports genetics were realized practically in the system of the individual prediction of the development of various signs and abilities of a person and used successfully at various stages of sports training and selection. Practical criteria of the individual prediction are data on family sports endowments, features of genetic conditionality of signs (morphological, motive, psychophysiological) in the development, and identification of the genetic markers which define predisposition to a certain activity of a person or development of signs. The essence of genetic marking is explained by the following regularities. The gene coding a certain property which is shown at late stages of ontogenesis is sometimes closely linked (or it is in a genetic zone of the same chromosome; pic. 1) with another gene (marker) which forms an external, easily observed sign already at birth. The signs which are controlled by them tend to be inherited together when the genes are coupled. The graphic card of distribution of the genes on chromosomes which control good health and physical development of a person is shown in pic. 1. 170 genes and genetic zones are given in the card which are connected with the signs and features of physical development of interest to us, and their number increases constantly with the development of biological science. It is possible to judge not only the existence, but also the lack of predisposition in the development of the studied sign of a person at the identification of a sign-marker [4]. However, genetic markers of endowments for high achievements in separate sports have not been studied enough.

communication of the research with scientific programs, plans, subjects

The work is performed in compliance with "The consolidating plan of the research work in the sphere of physical culture and sport for 2011-2015" of the Ministry of Ukraine for family, youth and sport on the subject "Theoretic-methodical bases of individualization of the educational and training process in game sports" (No. of the state registration is 0112U002001).

the purpose of the research

To define genetic criteria which can be used at the selection of gifted volleyball players.
Material and Methods of the research

Methods of the theoretical analysis and generalization, the system analysis, the genealogical method of genetics, and methods of the dermatoglyphic and serologic analysis were used in the work. 50 high-class female volleyball players and 50 girls of the general population who did not engage in sports, at the age of 20-29 years old, took part in the research.

results of the research and their discussion

Genealogical researches. It turned out in the genealogical research of the qualified female volleyball players that parents of the sportswomen often had high physical activity and good results in different types of sport in young years. It was revealed that 56.4% of fathers and 32.7% of mothers of the sportswomen had engaged in sports earlier, whereas in the compared group of youth at 20-29 years old who do not engage in sports, 27.8% of fathers and 11.4% of mothers had been sportsmen earlier. In 8.3% of families of the qualified female volleyball players both parents had played sports earlier, while among the non-sportsmen only 2.8% of families were such. These results can be compared to earlier conducted researches of R. Kovár [12]. Results of researches on sports activity of parents of outstanding sportsmen of different types of sport are given in tab. 1. As we see, the family enthusiasm for sport of the probands – volleyball players – in many respects coincides with the family motive endowments of representatives of other populations and sports. This genetic regularity allows claiming that family motive endowments can be an informative criterion in the system of sports selection of young volleyball players.

Dermatoglyphic researches. Three main papillary patterns of fingers (pic. 2) were defined in the researches: A – arches, L – loops, W – whorls, and also the fourth option of difficult (compound) dermatoglyphic patterns of fingers of hands (type LW) (pic. 3). [Pic. 2. Main types of papillary patterns of fingers: a – an arch, the number of deltas is exactly 0, the numerical indicator of combs is equal to 0; b – a loop, the number of deltas – 1, the numerical indicator – 13; c – a whorl, the number of deltas – 2, the numerical indicator – 17 (according to the bigger left count).] The qualified volleyball players had the following distribution of types of finger patterns (tab. 2) in comparison with the control group of non-training women. We see an essential distinction in the percentage of arc dermatoglyphs between the two investigated groups. The occurrence of the simplest patterns is greater (18.7%) in women of the general population than in sportswomen (8.5%). There are no essential distinctions in loop patterns between the two investigated groups (U+R = 59.3 and 58.1%, respectively, for those engaged and not engaged in sports). At the same time, the distribution of difficult (whorl) patterns differs between the groups. Sportswomen have a more frequent occurrence of difficult patterns (W+LW = 32.2%) than women of the general population (W+LW = 23.2%). Comparing these results to our previous researches (L.P. Serhiyenko, 1995; L.
Serhiyenko, 1999), we will note the following (tab. 4). The children who have a higher development of high-speed abilities (the ability which is basic for volleyball players) have a greater occurrence of difficult patterns (type W) on the fingers of the hands and a smaller occurrence of simple patterns (type A). These distinctions are even more expressed (from 12.8 to 27.3%) when comparing the sportsmen-sprinters with those who do not engage in sports. For example, from 5 to 8 whorl types of patterns on two hands were revealed at masters of sports – men [9]. Similar indicators were obtained in many respects in the researches of T. F. Abramova, T. M. Nikitina, N. N. Ozolin [1]. The above-stated material allows claiming that it is possible to use the following informative dermatoglyphic criteria at the sports selection of young volleyball players: - type of patterns of fingers of hands. The quantity of whorl patterns on two hands has to make up about 30 to 40% at the gifted volleyball players; the occurrence of difficult (whorl) patterns will most often be within 20-25% at the children who are not predisposed to this kind of sport; - the total comb account on two hands (TRC) can be the second dermatoglyphic criterion. As a rule, it ranges from 140 to 160 combs at the children who are predisposed to volleyball classes, and at the children who do not have such predisposition – from 120 to 130 combs.

Serologic researches. Blood groups of the system AB0 and the Rhesus factor of female volleyball players and people of the general population were studied in the serologic researches. The data were taken from medical records of the participants of the researches. The distribution of blood groups at the qualified female volleyball players is presented in tab. 5. The distribution of blood groups in the control group and in people of the Ukrainian population is given for comparison in this table. Comparisons show that blood group I(0) occurs most often at the qualified female volleyball players. It is observed twice as often at sportswomen as in the control group of women, and 16% more often in comparison with the population data. An insignificant percentage is noted for the II(A) blood group at female volleyball players. Women of the control group and people of the general population have insignificant differences. The occurrence of the third blood group III(B) at sportswomen exceeds that in the control group and in the population almost twice. The fourth blood group occurs rather seldom in all people; besides, female volleyball players with such blood group were not revealed at all. The presence of the Rhesus factor at the examined sportswomen in comparison to the control group of women who did not engage in sports is given in table 6. As we see, female volleyball players generally have a positive Rhesus factor (+Rh). Comparing the obtained data with the generalized results of the serologic researches (L.P.
Sergienko, 2004), we will note that the I(0) blood group, as a rule, is associated with a high development of high-speed and power abilities and most often occurs at sportsmen of high-speed strength sports. This blood group is a genetic marker of good health and considerable prospects for physical development. The third blood group III(B), as a rule, occurs more often at people having high coordination abilities. It is associated with motive activity which provides complex manifestation of motive abilities in changing situations (for example, such as occur in sports). We will remind that high-speed and power abilities and coordination abilities are basic for the sports success of volleyball players. The positive Rhesus factor, as a rule, characterizes a high predisposition of a person to the development of anaerobic efficiency [7]. The above-stated results of the serologic researches allow claiming that informative criteria of high prospects for classes at the individual forecast in the system of sports selection in volleyball can be: - existence of the I(0) or III(B) blood group. Besides, in our opinion, sportsmen with the I(0) blood group can be more perspective as forwards, and those with the III(B) blood group as setters; - existence of a positive Rhesus factor (+Rh) together with the I(0) or III(B) blood group. It is methodologically justified to carry out the genetic prediction of the prospects of young volleyball players at the second and third stages of sports selection. Features of the development of morphological features, motive abilities and family sports endowments are defined at the second stage. And genetic markers are used in the system of the sports prediction at the third stage of sports selection (selection for improvement in a certain sport is carried out here). The regularities which were received on the selection of female volleyball players, in our opinion, can be extrapolated to the men's contingent of sportsmen.

conclusions

1. The results of the genealogical researches allow claiming that family motive endowments can be an informative criterion in the system of sports selection of young volleyball players.
2. Dermatoglyphic criteria in the individual prediction of motive endowments of volleyball players are: - existence of a difficult type of dermatoglyphic pattern of fingers of hands. The quantity of whorl patterns on two hands has to make up from 30 to 40% at the gifted volleyball players; - existence of a bigger than the population average number of the total comb account on two hands (TRC). As a rule, it ranges from 140 to 160 combs at the children who are predisposed to volleyball classes.
3. Blood groups of the system AB0 can be criteria of predisposition to volleyball classes. A serologic marker at the perspective volleyball players can be the I(0) or III(B) blood group together with a positive Rhesus factor (+Rh). Sportsmen with the I(0) blood group are more predisposed to performance of the functions of forwards, and those with the III(B) blood group to performance of the functions of setters.

prospects of further researches

Further, researches of the features of formation of such genetic markers at the gifted volleyball players can be of interest: iridologic, odontologic, morphometric, molecular.
Two options of loop patterns were compared: U – an ulnar loop, which is open in the ulnar (fibular) part, and R – a loop open in the radial (tibial) part. The quantity of combs on separate fingers of the right and left hands, and in total on the right, the left and both hands, was counted. It is possible to get acquainted with the full technique of the analysis of dermatoglyphics of fingers of hands in the monograph of L. P. Sergienko [9]. [Pic. 3. Various types of difficult (compound) dermatoglyphic patterns of fingers of hands: a – a double loop (TL conventionally), b – a lateral pocket loop (LPL conventionally), c – three-deltoid patterns (ACC conventionally).] The local distribution of the comb account on separate fingers of the right and left hand was defined for the two investigated groups in the researches (tab. 3). The average occurrence of the quantity of combs on separate fingers is from 12 to 20 at sportswomen, and from 10 to 17 at women of the general population. The total quantity of combs on the right and left hand (TRC) also differs between sportswomen and women of the general population: 154.6 and 128.5, respectively. Essential distinctions are revealed separately on 4 fingers: RC-1 of the left hand, RC-2 of the right hand, RC-3 of the right and left hands. In all cases sportswomen had bigger absolute measures of the comb account than the women who do not engage in sports. TRC variations were within 140-160 at sportswomen, and within 120-130 at women of the general population (the level of distinctions is high, p<0.01).
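The dermatoglyphic and serologic criteria summarized in the conclusions above amount to a simple screening rule. Below is a minimal, hypothetical Python sketch of how such a rule could be combined; the thresholds (30-40% whorl patterns on two hands, a TRC of 140-160 combs, blood group I(0) or III(B) with a positive Rhesus factor) are taken from the text, while the function and field names are our own illustration, not part of the original study.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    whorl_patterns: int   # number of whorl (W/LW) patterns on both hands (out of 10 fingers)
    trc: int              # total ridge (comb) count on both hands
    blood_group: str      # "I(0)", "II(A)", "III(B)" or "IV(AB)"
    rh_positive: bool     # Rhesus factor

def dermatoglyphic_criterion(c: Candidate) -> bool:
    """Whorl share of 30-40% (i.e. 3-4 of 10 fingers) and TRC of 140-160 combs."""
    whorl_share = c.whorl_patterns / 10 * 100
    return 30 <= whorl_share <= 40 and 140 <= c.trc <= 160

def serologic_criterion(c: Candidate) -> bool:
    """Blood group I(0) or III(B) combined with a positive Rhesus factor."""
    return c.blood_group in ("I(0)", "III(B)") and c.rh_positive

def suggested_role(c: Candidate) -> str:
    """The text associates I(0) with forwards and III(B) with setters."""
    return {"I(0)": "forward", "III(B)": "setter"}.get(c.blood_group, "undetermined")

if __name__ == "__main__":
    candidate = Candidate(whorl_patterns=4, trc=152, blood_group="III(B)", rh_positive=True)
    print("dermatoglyphic criterion met:", dermatoglyphic_criterion(candidate))
    print("serologic criterion met:", serologic_criterion(candidate))
    print("suggested role:", suggested_role(candidate))
```

Such a rule would, of course, only be one input alongside the genealogical data and the morphological and motor-ability assessments described for the second and third stages of selection.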
Selective Laser Melted M300 Maraging Steel—Material Behaviour during Ballistic Testing

Significant growth in knowledge about metal additive manufacturing (AM) affects the increase of interest in military solutions, where there is always a need for unique technologies and materials. An important section of materials in the military are those dedicated to armour production. An AM material is characterised by different behaviour than those conventionally made, especially during more dynamic loading such as ballistics testing. In this paper, M300 maraging steel behaviour was analysed under the conditions of ballistic testing. The material was tested before and after solution annealing and ageing. This manuscript also contains some data based on structural analysis and tensile testing with digital image correlation. Based on the conducted research, M300 maraging steel was found to be a helpful material for some armour solutions after pre- or post-processing activities. The conducted solution annealing and ageing increased the ballistic properties by 87% in comparison to as-built samples. At the same time, the material's brittleness increased, which led to a significant growth in fragmentation of the perforated plate. According to such phenomena, a detailed fracture analysis was made.

Introduction

Military usage of AM has become increasingly popular because of the significant growth of different technologies and dedicated materials for those technologies [1][2][3][4]. Until now, AM technologies were mainly used in military solutions for rapid prototyping (RP), and for rapid tooling (RT) to a limited extent [5][6][7][8]. Recently, the popularity of using AM technologies in military solutions has increased, especially from the applicability point of view. One example is Kristoffersen et al.'s [9] research results, where the ballistic perforation resistance of AM AlSi10Mg aluminium plates was taken into account. Comparing AM materials and conventionally made materials, the authors determined that the difference in their ballistic performance was negligible. The authors registered a residual velocity for die-cast plates at a value of 230.7 m/s, while for the AM plates it was 231.6 m/s. The obtained results are very encouraging, especially considering that lightweight materials (aluminium and titanium-based) are of interest to many research facilities [10][11][12][13][14][15][16][17][18][19]. A different approach was suggested by Zochowski et al. [20], where the authors analysed ballistic properties of bulletproof vest inserts made of AM titanium alloy. Their research results indicated a significant capability of absorbing and dissipating the projectile impact energy from about 500 J to 0 J in 0.15 s. In Hassanin et al.'s [21] work, the authors proved an increased, higher ballistic performance of AM metamaterial characterised by the shape memory, superelasticity, and negative Poisson's ratio properties compared to conventional steel armours.
They improved the energy absorbed by unit mass from 91.28 and 252.35 N•mm/g in the case of solid steel and solid NiTi, respectively, to 495 N•mm/g for the optimized NiTi, AM auxetic structure. A different usage of AM in military solutions was suggested by Limido et al., where the authors attempted to design a warhead penetrator using lattice structures to reduce its mass and guarantee, at the same time, its mechanical strength and performance, or even improve them. In their research, austenitic steels were considered. From a ballistic resistance point of view, ceramic materials were also prevalent. In the case of that type of material (mostly alumina-Al2O3), AM is possible. Jones et al. [22] performed mechanical and ballistic investigations of AM alumina-based armour plates. Unfortunately, their research indicated weaknesses of AM materials made using direct-ink writing technology (DIW), characterised by ballistic properties decreased by 45% as compared to the isostatic pressed (IP) equivalent material. Similar research was conducted by Appleby-Thomas et al. [23], where the authors found that AM ceramic material was slightly less effective than that conventionally made. During the literature review, it was difficult to find research papers connected with the ballistic performance of steel parts obtained by AM. It is well known that those materials are commonly used in armoured parts production [24][25][26][27]. One example is maraging steels, which could be used in that type of solution. Kaiser et al. [28] found that that type of alloy met the acceptance standards of military specifications for First Article Certification and was certified for use on USA produced armoured systems. Regarding the usefulness of maraging steels in armour application and AM possibilities from a geometrical complexity point of view, ballistic tests were carried out. Additionally, it is well-known that maraging steels are amenable to heat treatment (solution annealing and ageing). Hence, two types of material were tested before and after heat treatment. Regarding our own previous research [29][30][31][32][33][34][35], to better understand AM material behaviour, it is essential to perform structural analysis of the material with properly described fractures. This approach is suggested in this paper, which will be helpful in understanding the effects of solution annealing and ageing in AM plates on tensile-deformation and ballistic-testing properties.

Material

The material used for samples manufacturing was gas atomized M300 maraging steel (Carpenter Additive, Philadelphia, PA, USA). Our own scanning electron microscopy (SEM) observations (Figure 1) indicated that powder particles were characterised by spherical shapes (diameters of 20-63 μm) with some satellites on the external surface. Material chemical composition based on the producer's quality control measures is shown in Table 1.

The Manufacturing Process

The SLM 125 HL (SLM Solutions, Lubeck, Germany), with a 400 W laser source, was used for samples manufacturing. Regarding the simple sample shape, its geometry was prepared directly in the software for process preparation, Magics (Materialise, Leuven, Belgium, version 19). The AM process was carried out under typical conditions of laser powder bed fusion (L-PBF) in an argon atmosphere, where the oxygen content was below 0.1%.
We used the process parameters listed below to produce samples:
• Laser power: 300 W
• Exposure velocity: 720 mm/s
• Hatching distance: 0.12 mm
• Layer thickness: 0.05 mm
The above mentioned parameters (hatching distance and layer thickness) are shown in Figure 2. The use of the parameter values mentioned earlier generated an energy density at the level of 69.44 J/mm³, which is consistent with the equation E_d = L_P / (e_v · h_d · l_t), where: L_P – laser power (W); e_v – exposure velocity (mm/s); h_d – hatching distance (mm); l_t – layer thickness (mm).

For ballistic tests, the samples had a cuboidal shape with dimensions of 110 mm × 110 mm × 6.5 mm (L × W × H). The height of the sample was selected to allow part penetration during the STANAG 4569 level I test (5.56 mm × 45 mm NATO projectile (SS109) at 30 meters with a velocity of 910 m/s) and a lack of penetration after a 20% projectile velocity decrease. This approach was suggested to better understand material behaviour during dynamic loading in ballistic testing. The plate sample, shown in Figure 3, was made in such a way as to allow for ballistic testing in a direction perpendicular to the substrate plate surface (through the deposited material's layers).
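As a quick check of the volumetric energy density quoted for the build parameters above, the sketch below recomputes it from the listed values. This is our own illustrative Python snippet (the variable names simply follow the symbols in the equation and are not part of the original paper).

```python
def volumetric_energy_density(laser_power_w: float,
                              exposure_velocity_mm_s: float,
                              hatching_distance_mm: float,
                              layer_thickness_mm: float) -> float:
    """E_d = L_P / (e_v * h_d * l_t), returned in J/mm^3."""
    return laser_power_w / (exposure_velocity_mm_s
                            * hatching_distance_mm
                            * layer_thickness_mm)

if __name__ == "__main__":
    # Process parameters reported for the M300 plates in this study
    e_d = volumetric_energy_density(300.0, 720.0, 0.12, 0.05)
    print(f"Energy density: {e_d:.2f} J/mm^3")  # ~69.44 J/mm^3, matching the text
```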
Visible cylindrical undercuts on the panel's edges were necessary to allow substrate removal after the process was finished. Half of the manufactured parts were heat-treated using a Nabertherm P300 annealing furnace (Nabertherm GmbH, Lilienthal, Germany). The process was divided into two stages: solution annealing performed at a temperature of 820 °C for one hour with air cooling, and ageing at a temperature of 480 °C with air cooling.

Description of the Testing Methodology

The ballistic tests were carried out using the experimental stand presented in Figure 4. The stand consisted of a barrel launching system, calibre 5.56 mm (Figure 4g), and a specimen mount (Figure 4a) covered by a protective box with a bullet catcher (Figure 4d). To provide accurate values of impact velocity and to conduct the observation of terminal ballistics effects, a set of three devices was applied: an electromagnetic device for muzzle velocity measurement, EMG-1 (Figure 4f), an intelligent light screen, Kistler 2521A (Figure 4e) (Kistler, Switzerland), for bullet velocity measurement, and a high-speed camera, Phantom v1612 (Figure 4b) (Vision Research, Wayne, NJ, USA). Tests were conducted with 5.56 mm × 45 mm NATO M855 (SS109) steel core bullets. Following the results presented in [36], the expected value of the projectile velocity was equal to (935 ± 6) m/s regarding the specimen distance for the under-investigation ammunition and applied barrel length. Accounting for the required 20% decrease of impact velocity for the second conditions set, the round propellant mass was adjusted to obtain a velocity of approx. 750 m/s. A Keyence VHX 7000 optical microscope (Keyence International, Mechelen, Belgium) was used for surface structural and fracture analysis. The material was mounted in resin, ground, polished, and etched to allow microstructural observations. Additional studies for determining powder grain shape and some microfracture phenomena were completed using a Jeol JSM-6610 scanning electron microscope (SEM) (JEOL Ltd., Tokyo, Japan). To better describe material behaviour, tensile testing of samples was used. The tensile tests were carried out via the Instron 8802 (Instron, Norwood, MA, USA) testing machine. During those tests, digital image correlation (DIC) of the deformation process was made. DIC analyses, a non-contact optical technique to measure three-dimensional (3D) deformations, were made using Dantec Dynamics (Dantec, Ulm, Germany). The tensile testing rig with the DIC system is shown in Figure 5.
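The second test condition described above is defined relative to the expected muzzle velocity, so the target value can be derived directly; the small sketch below illustrates that arithmetic (an illustrative snippet of ours, not taken from the paper).

```python
expected_velocity = 935.0   # m/s, expected impact velocity reported for the full charge
reduction = 0.20            # required 20% decrease for the second test condition

target_velocity = expected_velocity * (1.0 - reduction)
print(f"Target impact velocity: {target_velocity:.0f} m/s")  # 748 m/s, i.e. approx. 750 m/s
```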
To check how the material's properties changed after heat treatment, tensile tests were conducted on samples whose geometry was designed based on the ASTM E466 96 standard (Figure 6). For each material condition, five samples were tested.

Tensile and Structural Analysis

Structural and tensile analysis was necessary to justify the character of material behaviour after deterioration in different conditions: before heat treatment (BHT), after heat treatment (AHT), and after the impact of a projectile with different velocities. Additionally, this type of analysis was helpful for further fracture description. The images of the material microstructure in each condition are shown in Figure 7. The Z-direction, visible in Figure 7, is the direction of the material's layer growth during the AM process. The material condition after the AM process is characterised by a typical, layered structure with visible molten pools [37,38]. The microstructure is very fine and regular in each molten pool, as is shown in Figure 8. Heat treatment resulted in a more homogeneous and fine-grained structure. It should be noted that the martensite needles were short and dark, which contributed either to an increase in hardness or a decreased material plasticity [39,40]. Tensile testing results for each sample group are shown in Figure 9. After heat treatment, tensile strength increased by almost 45% but, at the same time, elongation decreased by 67%, which significantly changed material properties, especially regarding its behaviour during dynamic loading such as ballistics testing.
Based on the obtained tensile tests, all characteristic values were measured, calculated, and presented in Table 2. Values shown in the Table, as mentioned earlier, are averages with standard deviation, which was calculated based on five measurements for each sample group. More advanced tensile analysis of the samples was possible by using a DIC system. To adequately describe material behaviour during tensile testing, images at certain load levels were taken (Figure 10) for BHT samples and (Figure 11) for AHT samples. In the conventional yield point, the distribution of strain fields was very even. The sample showed no defect connected with the manufacturing process. In the case of material strain at the ultimate tensile strength point, the areas of maximum strain were marked with oval lines. There was a banding effect visible, which is evenly distributed. Before sample destruction (Figure 10d), a significant deformation at the level of 5% revealed areas of potential sources of cracking. Their location and number might relate to part porosity, which is visible in Figure 7. Considering the DIC image after fracture (Figure 10e), deformation fields are visible despite the lack of load. The white-dotted line (Figure 10e) indicates relaxation by the fracture, which appeared as strain banding. The DIC images captured during tensile tests of samples in the AHT condition are shown in Figure 11. To maintain consistency, the form of the obtained results is presented the same as in Figure 10.
In the conventional yield point, there were visible small areas with increased strain values. This phenomenon could be related to the increased porosity caused by additional heat treatment [32]. The distribution of those areas (marked using black arrows) is random over the whole sample's surface. In the case of the DIC image at the UTS point, there were no zones of constant strain. Only small areas of increased strain values were visible (Figure 11, dashed line). After exceeding the UTS point, no necking formation was visible, which indicates the brittleness and low plasticity of the material. In the last part of the material fracture, the cracking process was very dynamic, which caused the scale in Figure 10e to go out of range. The course of the fracture line, which is complex, may be related to the irregular porosity and brittleness of the material.

Ballistic Tests Results

The ballistic tests were conducted under the conditions summarized in Table 3. As can be seen, for the initially applied value of impact velocity, the heat-treated samples exhibited more effective ballistic protection despite the more brittle character of specimen damage (described further below). The material behaviour during the perforation of each target plate is shown in Figure 12. Figure 12 contains ballistic test results captured by a high-speed camera, which allowed for the determination of the perforation mechanism during projectile penetration. There were many failure mechanisms for the sample in the BHT condition, which was penetrated by a projectile with a velocity equal to 936 m/s (Figure 12a). The first one was the plugging mechanism, highlighted in Table 3 using a yellow arrow to show the cut-out plug.
During that kind of mechanism, the fragmentation of the material appeared in the form of several pieces of material with additional reverse fragmentation (marked by a red arrow in Figure 12a) and the generation of numerous shards on both sides. Typical material fragmentation occurred for the AHT sample penetrated by a projectile with a velocity of 918 m/s (Figure 12b). A yellow arrow marked severed material parts. Particular attention should be paid to the dust formed during the penetration of this sample (it was marked with a yellow and red rectangle in Figure 12b), as this is a typical phenomenon in armour steels. In the case of a plate in the BHT condition, penetrated by a projectile with a velocity of 746 m/s, no perforation was registered. In this case, a dust cloud had formed on the side of the projectile impact. Several small parts of the material were also spotted (Figure 12c, Step II, red oval). This proves the plasticity of the material and good dispersion of the impact energy compared with plates in the AHT condition. Regarding the last sample, in the AHT condition and penetrated by a projectile with a velocity of 743 m/s, there was also no perforation. As in the previous case, a dust cloud was generated, but with more significant dimensions. Additionally, a greater amount of tiny debris was noticed (Figure 12d, red ovals). Their increased number could be caused by a higher hardness of the material and its internal cracking.

Fracture Analysis

After ballistic testing, all types of fractures were considered. Structural analysis was necessary to determine the character of material deterioration in different conditions: BHT, AHT, and after the impact of a projectile with different velocities. Results in each condition are shown in Table 4.

Table 4. Material structure before (BHT) and after (AHT) heat treatment, and after deterioration with different velocities.
As shown in Table 4, all samples were characterised by a significant porosity (above 1%), which is a very undesirable phenomenon. The occurrence of several such pores relates to the high temperature of the previously exposed layer, which caused key-hole porosity generation after delivering a significant amount of energy into the material volume [41,42]. It should be noted that the process parameters used had default values for M300 steel, but the melted material volume was too large to draw away the proper amount of heat. In non-perforated samples (Table 4, no. 3 and 4, marked by blue ovals), a part of the projectile remained connected with the plate after the impact. During DIC analysis, it was suggested that porosity could be a source of material cracking, visible in the plate after projectile impact with a 743 m/s velocity in the AHT condition (Table 4, no. 4). Cracks in the material, which appeared after the projectile impact, went through the pores. That phenomenon is hazardous because cracking through the porosity, with an increased brittleness of the material, would cause significant material fragmentation directly after impact. In the case of perforated plates penetrated by a projectile with a higher velocity (920 m/s), additional macroscopic analysis was completed, as shown in Figure 13. In both cases (BHT and AHT plates), many features were visible that are characteristic of high-hardness material perforations. Comparing both cases, the material plate in the BHT condition seems to be more plastic than the AHT plates. The projectile's lead cap caused a slight deformation on the entry side and left a small amount of its material on the external perforation surface. On the other side, a plug with a diameter like the perforation size was pushed out by the projectile.
The surface of the perforation, shown in Figure 14 (areas a), was characterised by a significant deformation, which indicates the material's ductility in that area. This phenomenon might be connected with a local increase in temperature during perforation due to friction between the surface of the projectile and the panel. That kind of approach was suggested by Kristoffersen et al. [9]. However, this issue requires more detailed investigation and will be a topic for further research. Additionally, there were cracks visible around the contact of the projectile and the plate, which indicates a material tendency for delamination after such dynamic loading (samples were manufactured layer-by-layer, and the projectile impact was perpendicular to the surface of the layers). In the case of the AHT panel, the perforation channel on the projectile entrance side had an irregular shape due to the brittle tearing of the material, which appeared as short and straight crack paths. Further penetration mechanisms caused a radial fracture and detachment of a large material part with an area several times larger than the projectile's cross-sectional area. That phenomenon also caused material fragmentation. However, the fragmentation of the material does not indicate its ballistic resistance, as it is only a result of its low ductility.
In the cross-section of the perforation channel, a horizontal border was visible (Figure 14, areas b) from which the radial tearing of the material began. It indicates an interlayer cracking mechanism which may be caused by the layered structure of the AM material.

Conclusions

The obtained research results were helpful to understand AM M300 steel behaviour under ballistic testing conditions. The broad scope of the research, with material conditions of BHT and AHT and two projectile velocity levels, allowed us to draw the following conclusions:
1. Despite the high UTS of M300 steel, it was possible to reduce projectile velocity only by 25% after perforation, with total material penetration;
2. Solution annealing and ageing allowed for significantly increased ballistic properties. The same plate after heat treatment reduced the projectile velocity by 87%, unfortunately also with penetration. Additionally, heat treatment caused an increase in material fragmentation after impact;
3. The source of material deterioration was twofold. On the one hand, it was connected with high material brittleness (especially after heat treatment), and on the other hand, it was connected with increased porosity of the tested plates;
4. To reduce porosity in parts characterized by significant volume, experimental process parameter selection, thermodynamical modelling of the exact geometry manufacturing, or hot isostatic pressing (HIP) annealing could be used;
5. Despite the revealed weaknesses of the AM M300 maraging steel, this kind of material could be useful in some armour solutions, especially for the production of complex shapes of armoured parts. However, to properly manufacture each part, some pre- or post-process activities are necessary.
Dimerisation of the Drosophila odorant coreceptor Orco

Odorant receptors (ORs) detect volatile molecules and transform this external information into an intracellular signal. Insect ORs are heteromers composed of two seven-transmembrane proteins, an odor-specific OrX and a coreceptor (Orco) protein. These ORs form ligand-gated cation channels that also conduct calcium. The sensitivity of the ORs is regulated by intracellular signaling cascades. Heterologously expressed Orco proteins also form non-selective cation channels that cannot be activated by odors but by synthetic agonists such as VUAA1. The stoichiometry of OR or Orco channels is unknown. In this study we engineered the simplest oligomeric construct, the Orco dimer (Orco di), and investigated its functional properties. Two Orco proteins were coupled via a 1-transmembrane protein to ensure proper orientation of both parts. The Orco di construct and Orco wild type (Orco wt) proteins were stably expressed in CHO (Chinese Hamster Ovary) cells. Their functional properties were investigated and compared by performing calcium imaging and patch clamp experiments. With calcium imaging experiments using the allosteric agonist VUAA1 we demonstrate that the Orco di construct—similar to Orco wt—forms a functional calcium-conducting ion channel. This was supported by patch clamp experiments. The function of Orco di was seen to be modulated by CaM in a similar manner as the function of Orco wt. In addition, Orco di interacts with the OrX protein Or22a. The properties of this complex are comparable to those of Or22a/Orco wt couples. Taken together, the properties of the Orco di construct are similar to those of channels formed by Orco wt proteins. Our results are thus compatible with the view that Orco wt channels are dimeric assemblies.

Orco has a chaperone function as it supports the dendritic localization of OrX proteins (Larsson et al., 2004), and it contributes to the OR ion channel pore formation (Nichols et al., 2011; Pask et al., 2011; Nakagawa et al., 2012). For a recent report on Orco function see Stengl and Funk (2013). In the absence of OrX proteins, Orco forms a homomeric ion channel (Wicher et al., 2008; Jones et al., 2011). A FRET study demonstrated homodimeric and heterodimeric interactions between Orco and OrX proteins (German et al., 2013). It is, however, not excluded that dimers may dimerise to form tetramers, as observed for Orai1 channels (Penna et al., 2008). Whether Orco channels form dimers like the heteromeric OR channels remains elusive. Channelrhodopsin (ChR) is another type of seven-transmembrane domain protein that acts as an ion channel. Protein crystallization revealed a dimeric structure (Müller et al., 2011; Kato et al., 2012). The conductance pathway of ChR2 is located at the dimer interface with transmembrane helices 3 and 4 (Müller et al., 2011). Dimerisation has been observed for GPCR proteins, either as a homomeric interaction of muscarinic acetylcholine receptors or as a heteromeric coupling of GABA B receptors (Wicher, 2010). Artificial homo- as well as heterodimerization of GPCR proteins was performed by linking the C terminus of one protein to the N terminus of the other, spaced by a membrane-spanning linker, leading to functional constructs that were well expressed in heterologous cells (Terpager et al., 2009). In the present study we ask whether a dimeric Orco construct would display channel properties and, if so, whether these properties differ from those of Orco wild type (Orco wt) channels.
For this purpose we engineered an Orco dimer (Orco di) and expressed it in Chinese hamster ovary (CHO) cells. By means of calcium imaging and patch clamp experiments we show that the dimeric construct displays properties similar to those of Orco wt channels.

SYNTHETIC DIMER CONSTRUCT
The Orco dimer construct (Orco di, 3.6 kb) was generated by fusing two Drosophila melanogaster Orco subunits into a single open reading frame in the pcDNA3.1(-) mammalian expression vector (Invitrogen). To ensure a correct orientation of the seven transmembrane domains of each of the two Orco subunits, they were coupled via a 177-amino-acid, single-transmembrane protein, the human sodium channel type I beta subunit (SCN1B; NM_001037.4). SCN1B was synthesized by Eurofins MWG Operon and cloned into the TOPO pCR2.1 vector (Invitrogen). The oligonucleotides used for generating the Orco dimer contained XhoI/SacI restriction sites: Orco F 5′-GAT CTC GAG CTA TGA CAA CCT CGA TGC AGC C-3′, Orco R 5′-CGA GCT CTT TCT TGA GCT GCA CCA GCA CCA TAA AGT AGG T-3′; or NotI/HindIII restriction sites: Orco F 5′-TTG CGG CCG CCT ATG ACA ACC TCG ATG CAG CCG AGC-3′, Orco R 5′-TCG AAG CTT GTT ACT TGA GCT GCA CCA GCA-3′. The PCR products were separately TA-cloned into the TOPO vector (Invitrogen). The two Orco units were then subcloned into the pcDNA3.1(-) vector containing SCN1B. All sequence analysis was done by double-stranded DNA sequencing at Eurofins MWG Operon; sequence congruence was 100%.

CELL CULTURE AND CALCIUM IMAGING
CHO cell lines stably expressing Orco wt and Orco di were produced by Trenzyme Life Science Services (Konstanz, Germany) and grown in cytobox™ CHO select medium containing puromycin (Cytobox UG, Konstanz, Germany). The cells were grown on poly-L-lysine (0.01%, Sigma-Aldrich) coated coverslips. The culture conditions and the transient transfection protocol for the coexpression of Or22a were as described by Wicher et al. (2008). Cells for imaging were loaded with fura-2 by incubation in 1 ml CHO select medium containing 5 µM fura-2/acetoxymethyl ester (Molecular Probes, Invitrogen) for 30 min. Excitation of fura-2 at 340 and 380 nm was performed with a monochromator (Polychrome V, T.I.L.L. Photonics, Gräfelfing, Germany) coupled via an epifluorescence condenser into an Axioskop FS microscope (Carl Zeiss, Jena, Germany) with a water immersion objective (LUMPFL 40xW/IR/0.8; Olympus, Hamburg, Germany). Emitted light was separated by a 400-nm dichroic mirror and filtered with a 420-nm long-pass filter. The free intracellular Ca 2+ concentration ([Ca 2+ ] i ) was calculated according to the equation [Ca 2+ ] i = K eff (R − R min )/(R max − R); K eff , R min and R max were determined as described in Mukunda et al. (2014). Fluorescence images were acquired using a cooled CCD camera controlled by TILLVision 4.0 software (T.I.L.L. Photonics). The resolution was 640 × 480 pixels in a frame of 175 × 130 µm (40x/IR/0.8 objective). Image pairs were obtained by excitation for 150 ms at 340 nm and 380 nm; background fluorescence was subtracted. CHO cells were stimulated with VUAA1 and ethyl hexanoate applied via pipette.

WESTERN BLOT
We used the ab65400 plasma membrane protein extraction kit (Abcam, Cambridge, UK) for extraction of CHO cells expressing Orco wt or Orco di, or CHO-K1 cells (no Orco). For each sample, a cell pellet (1 g wet weight, culture density of ∼8–9 × 10 6 ) was collected by centrifugation. Equal loads of whole protein extracts were separated on a 7.5% SDS-PAGE gel and then electrophoretically transferred onto a PVDF membrane (Invitrogen).
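The ratiometric conversion given in the calcium imaging section above is a single algebraic step, so it can be sketched compactly. The snippet below is only an illustration: the calibration constants K eff, R min and R max are placeholders, not the values determined in Mukunda et al. (2014).

```python
import numpy as np

def ratio_to_ca(R, K_eff, R_min, R_max):
    """Convert background-subtracted fura-2 ratios (F340/F380) to [Ca2+]i
    via [Ca2+]i = K_eff * (R - R_min) / (R_max - R)."""
    R = np.asarray(R, dtype=float)
    return K_eff * (R - R_min) / (R_max - R)

# Placeholder calibration constants (not the study's values); K_eff is in molar units.
K_eff, R_min, R_max = 1.2e-6, 0.3, 6.0

ratios = np.array([0.45, 0.60, 1.10])                    # example F340/F380 ratios after background subtraction
print(ratio_to_ca(ratios, K_eff, R_min, R_max) * 1e9)    # [Ca2+]i in nM
```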
The membrane was then blocked in 5% non-fat dry milk in TBS-T (20 mM Tris-HCl, 150 mM NaCl, 0.1% Tween, pH 7.6) for 1 h at room temperature. The membrane was subsequently incubated with a primary polyclonal antibody against Orco (1:5000; kindly provided by Leslie Vosshall) in 2.5% non-fat dry milk in TBS-T overnight at 4 °C. The membrane was then washed with TBS-T and incubated with an HRP-linked secondary antibody (1:10,000) for 1 h at room temperature. After washing the membrane in TBS-T, the proteins were detected using an ECL western blotting detection kit (Signal Fire™ Elite, Danvers, MA, USA). Densitometry of bands was performed using the ImageJ package.

DATA ANALYSIS
Transmembrane domain prediction was performed with the TMHMM server v.2.0 (CBS, Denmark) and TopPred 0.01 (topology prediction of membrane proteins; Mobyle@Pasteur, France). For electrophysiology, the analysis software IgorPro (WaveMetrics, Lake Oswego, OR, USA) was used. Statistical analysis was performed in Prism 4 software (GraphPad Software, Inc., La Jolla, CA, USA). All data represent mean ± SEM.

RESULTS
Orco di was generated by fusing two Orco proteins of Drosophila melanogaster into a single open reading frame and subsequently cloning the fusion into a pcDNA3.1(-) mammalian expression vector (Figure 1A). To ensure an equal orientation of the two seven-transmembrane proteins, we coupled the single-transmembrane human sodium channel beta subunit SCN1B between the two Orco subunits. Orco wt and Orco di were stably expressed in CHO cells. To confirm that the Orco wt and Orco di proteins are expressed in the membrane, we extracted protein from the cells (see Section Materials and Methods) and performed a western blot. We obtained a band of the expected ∼50 kDa size corresponding to Orco wt and one of the expected ∼100 kDa for Orco di (Figure 1B); notably, Orco di showed a significantly lower expression level than Orco wt (Figure 1B). The CHO-K1 cell membrane extract with no Orco showed no bands. To study and compare their functional properties, we performed calcium imaging experiments using the ratiometric dye fura-2. Stimulation of the receptors with the synthetic Orco agonist VUAA1 led to rapid increases in the free intracellular Ca 2+ concentration [Ca 2+ ] i for both Orco wt- and Orco di-expressing cells (Figure 2A). The higher maximum increase observed for Orco wt (Figure 2B) may result from a more pronounced functional expression of the protein within the plasma membrane (see Figure 1B). For both Orco wt and Orco di the responses to VUAA1 terminated within 50 s, and the time constant τ of the decay in [Ca 2+ ] i was not significantly different (Figure 2C). To demonstrate that the observed Ca 2+ signals resulted from Ca 2+ influx into the cells, we stimulated Orco di-expressing cells in a Ca 2+ -free bath solution. Under these conditions [Ca 2+ ] i remained constant (Figure 2D). The presence of ruthenium red (RR), which has previously been shown to inhibit insect ORs (Nakagawa et al., 2005; Sato et al., 2008; Jones et al., 2011; Nichols et al., 2011), also abolished any Ca 2+ signal upon VUAA1 application (Figure 2E). These observations are in line with previous findings in Orco wt-expressing cells (Mukunda et al., 2014) and indicate that Orco di forms RR-sensitive Ca 2+ -permeable ion channels. Heterologously expressed Orco proteins show constitutive channel activity leading to enhanced resting [Ca 2+ ] i (Wicher et al., 2008).
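As an aside on the decay kinetics reported above: the text quotes decay time constants without stating the fitting procedure. A common way to obtain τ is a mono-exponential fit, sketched below on a synthetic trace; the numbers are invented for illustration and are not measurements from this study.

```python
import numpy as np
from scipy.optimize import curve_fit

def mono_exp(t, amp, tau, baseline):
    """Mono-exponential decay: signal(t) = baseline + amp * exp(-t / tau)."""
    return baseline + amp * np.exp(-t / tau)

# Synthetic [Ca2+]i trace (time in s, concentration in nM) standing in for a real recording
t = np.linspace(0, 50, 200)
trace = 80 + 150 * np.exp(-t / 12) + np.random.normal(0, 3, t.size)

p0 = [trace.max() - trace.min(), 10.0, trace.min()]       # rough initial guesses for amp, tau, baseline
(amp, tau, baseline), _ = curve_fit(mono_exp, t, trace, p0=p0)
print(f"fitted tau = {tau:.1f} s")
```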
Compared with non-transfected cells, the resting Ca 2+ levels in cells expressing Orco wt and Orco di appeared to be significantly enhanced, and to a comparable extent (Figure 2F). Thus Orco di channels seem to show a constitutive activity similar to that of Orco wt channels. To compare the transmembrane currents conducted by Orco wt and Orco di, we performed patch clamp experiments using the whole cell configuration. While receptor stimulation with VUAA1 in the non-invasive calcium imaging approach induced robust and reproducible responses, it appeared to be less efficient in the patch clamp recordings. For Drosophila Orco expressed in Xenopus oocytes as well, the VUAA1 concentration used here (100 µM) was just above the threshold and below the EC 50 of 190 µM (Chen and Luetje, 2012). Among the VUAA1-related OR agonists, OLC12 is more potent, as shown by Chen and Luetje (2012); for Drosophila Orco these authors report an EC 50 of 35 µM. A comprehensive structure-activity relationship analysis of VUAA1 derivatives is presented by Taylor et al. (2012). OLC12 induced transient inward currents in Orco wt- and Orco di-expressing cells (Figures 3A,B). The currents induced by OLC12 had similar amplitudes in Orco wt- (mean 220 pA) and Orco di- (mean 140 pA) expressing cells (Figure 3C). The current decay was slower for Orco di (τ = 14 ± 2.9 s) than for Orco wt (τ = 7.6 ± 0.7 s) (Figure 3D), which indicates a slower closure of the dimer channels. In conclusion, the patch clamp measurements demonstrate that Orco di gives rise to a membrane current upon agonist stimulation. In a previous study (Mukunda et al., 2014), we showed that calmodulin (CaM) modulates Orco channel activity. In order to check whether this regulation is conserved in Orco di, we stimulated cells expressing Orco di with VUAA1 in the presence of the CaM inhibitor W7 (Figure 4A). Application of W7 reduced the calcium responses to VUAA1 stimulation (Figure 4B) and significantly increased the decay time constant of the Ca 2+ response (Figure 4C). These effects are in line with the results obtained for Orco wt (Mukunda et al., 2014), and demonstrate conservation of the CaM regulation in the dimeric Orco construct. In heterologous expression systems Orco forms heterodimeric complexes with ligand-binding OrX proteins (Neuhaus et al., 2005; Benton et al., 2006). To test whether Orco di interacts with a ligand-binding odorant receptor OrX, we next coexpressed Or22a in Orco wt- and Orco di-expressing cells and stimulated them with VUAA1 and an Or22a ligand, ethyl hexanoate (Figures 5A,B). The Ca 2+ signals obtained from cells coexpressing Or22a displayed a slower decay than those from cells expressing Orco alone, as previously observed (Mukunda et al., 2014). The amplitude of [Ca 2+ ] i in Or22a/wt-expressing cells was larger than in cells expressing Or22a/di (Figures 5A,C). When stimulated with ethyl hexanoate, the amplitudes of the Ca 2+ signals were similar for Orco wt- and Orco di-expressing cells (Figures 5B,D). The calcium measurements with coexpression of Or22a show that Orco di interacts with OrX proteins to form a functional OR channel. The similar size of the odor-induced signals indicates a similar level of functional ORs generated with Orco wt and Orco di, respectively.

DISCUSSION
Orco is an integral part of insect ORs and is required for the correct insertion of OrX proteins in the dendritic membrane of the receptor neurons (Larsson et al., 2004).
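The EC 50 values quoted above come from concentration-response fits of the Hill type. As a generic illustration, and not a re-analysis of the published data, such a fit can be set up as follows; the concentrations and responses are made-up numbers.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, top, ec50, n):
    """Hill equation for an agonist concentration-response curve."""
    return top * conc**n / (ec50**n + conc**n)

# Hypothetical normalized responses at a few agonist concentrations (µM); not data from the study.
conc = np.array([10.0, 30.0, 100.0, 300.0, 1000.0])
resp = np.array([0.05, 0.15, 0.40, 0.70, 0.95])

(top, ec50, n), _ = curve_fit(hill, conc, resp, p0=[1.0, 150.0, 1.0])
print(f"estimated EC50 = {ec50:.0f} µM, Hill coefficient = {n:.2f}")
```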
In addition to forming heteromers with OrX proteins, with an as yet unknown stoichiometry, there is also evidence that Orco may form homomers (Neuhaus et al., 2005; Benton et al., 2006; German et al., 2013). Reports of the purification of insect OR subunits suggest potential dimeric and quaternary structure formation between Orco and Or22a. In this study we engineered the minimal oligomeric structure of Orco and asked whether Orco di exhibits the same channel properties as Orco wt. The calcium imaging experiments with Orco di-expressing cells demonstrated a Ca 2+ influx in response to non-odor OR agonists such as VUAA1 (Chen and Luetje, 2012; Figure 2). Heterologously expressed Orco proteins form Ca 2+ -permeable cation channels and show constitutive activity, which leads to elevated intracellular [Ca 2+ ] resting levels (Sato et al., 2008; Wicher et al., 2008; Jones et al., 2011; Sargsyan et al., 2011). The basal [Ca 2+ ] i levels of Orco di-expressing cells were also enhanced compared with non-transfected CHO cells. This also indicates a background activity of Orco di channels (Figure 2F). Agonist stimulation produced Ca 2+ signals similar to those observed for Orco wt, and these signals were dependent on extracellular Ca 2+ . Thus, Orco di shows functional expression and forms a Ca 2+ -permeable cation channel. Whole cell current measurements using patch clamp further confirm that Orco di displays ion channel activity and generates a transient inward current similar to Orco wt when activated by an appropriate ligand (Figure 3). In a recent study we showed that CaM activity affects the function of Orco channels (Mukunda et al., 2014). Stimulation of Orco wt cells in the presence of the CaM inhibitor W7 produced significantly reduced and prolonged [Ca 2+ ] i responses. The Orco protein contains a conserved putative CaM binding motif ( 336 SAIKYWER 344 ) in the second intracellular loop, and a point mutation in this putative CaM site (K339N) affects the Ca 2+ responses elicited by agonist stimulation. As the Orco di protein also contains the putative CaM binding motif, it was expected that CaM would regulate this construct. Indeed, the responses obtained with Orco di were similar to those obtained with Orco wt in the presence of W7 (Figures 4B,C), suggesting that Orco di is modulated by CaM in the same way as Orco wt. A mutational study of Bombyx pheromone receptors suggests that both constituents of olfactory receptors, Orco and OrX proteins, contribute to the ion channel pore (Nakagawa et al., 2012). Also, expression of Orco alone leads to functional channels, suggesting that Orco proteins may dimerize, which is supported by FRET experiments (German et al., 2013). Coexpression of Orco di and Or22a elicited Ca 2+ transients in response to application of the Orco agonist VUAA1 and the Or22a ligand ethyl hexanoate, as seen in cells expressing Or22a/Orco wt. This suggests an interaction of Orco di with OrX proteins, here represented by Or22a (Figure 5). The construction of concatameric GPCR dimers has raised the question of whether they form a functional dimer composed of the two coupled subunits or interconcatameric dimers, i.e., tetramers (Terpager et al., 2009). The first alternative was expected for homomeric constructs such as the β2-adrenergic receptor, but not for a heteromeric couple of the β2-adrenergic receptor and neurokinin receptor 1. A similar question arises concerning the composition of the Or22a and Orco complex. There might be a tetrameric interaction between two Or22a units and the dimer. Even for Orco di alone a tetrameric topology cannot be excluded.
In conclusion, our experiments demonstrate that the synthetic Orco di construct is functionally expressed and forms a functional Ca 2+ -permeable cation channel. Like Orco wt, it can be activated by synthetic agonists such as VUAA1 and its derivatives. Furthermore, Orco di seems to be constitutively active, leading to enhanced basal [Ca 2+ ] i levels in Orco di-expressing cells. Finally, our results show that Orco di is modulated by CaM in a similar way to Orco wt and that it interacts with OrX proteins such as Or22a. Thus the functional properties of the Orco di construct are very similar to those of Orco wt. This result is compatible with the assumption that Orco channels form dimeric assemblies. At present, however, this view requires more support, for example by testing an Orco construct that is prevented from dimerizing or by resolving the crystal structure of Orco complexes. An intriguing question is whether Orco di would be able to rescue Orco function in Orco-deficient fly mutants.
v3-fos-license
2024-02-24T06:17:51.501Z
2024-02-22T00:00:00.000
267809703
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.nature.com/articles/s41597-024-02958-1.pdf", "pdf_hash": "4df1796fb007ba57c0ff64606dadf48c2a26087e", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41628", "s2fieldsofstudy": [ "Medicine", "Sociology" ], "sha1": "97228253198e8598e82ce9294d89072c08eb131e", "year": 2024 }
pes2o/s2orc
Ethnicity data resource in population-wide health records: completeness, coverage and granularity of diversity Intersectional social determinants including ethnicity are vital in health research. We curated a population-wide data resource of self-identified ethnicity data from over 60 million individuals in England primary care, linking it to hospital records. We assessed ethnicity data in terms of completeness, consistency, and granularity and found one in ten individuals do not have ethnicity information recorded in primary care. By linking to hospital records, ethnicity data were completed for 94% of individuals. By reconciling SNOMED-CT concepts and census-level categories into a consistent hierarchy, we organised more than 250 ethnicity sub-groups including and beyond “White”, “Black”, “Asian”, “Mixed” and “Other, and found them to be distributed in proportions similar to the general population. This large observational dataset presents an algorithmic hierarchy to represent self-identified ethnicity data collected across heterogeneous healthcare settings. Accurate and easily accessible ethnicity data can lead to a better understanding of population diversity, which is important to address disparities and influence policy recommendations that can translate into better, fairer health for all. Introduction Health inequity is described by disparities in health status between individuals, such as prevalence of comorbidities, life expectancy, access to and quality of care services and treatments, and risk behaviours such as smoking and alcohol consumption.These factors can be influenced by age, sex, ethnicity, disability, socio-economic status, geographical location, and education, among others 1 .For example, many of these determinants were risk factors for infection severity, complications, and mortality during the COVID-19 pandemic [2][3][4] . "Ethnicity" commonly refers to terms used to self-report an individual's own perceived ethnic group and cultural background.This multidimensional, evolving concept can comprise physical appearance, race, culture, language, religion, nationality and identity elements, and is not always captured in electronic health records.Additionally, when recorded, ethnicity is often inaccurately coded, especially for groups other than the predominant group(s) in a given population 5,6 .Ethnicity classifications also change over time, limiting comparability with population-level census data 7 .In UK health records, although there are hundreds of heterogenous ethnicity groups defined in the form of SNOMED-CT codes or Read codes, among others, they are commonly collapsed into five or six categories, in part due to power considerations where fine-grained or granular categories would have smaller sample sizes [8][9][10][11] .However, these larger groups may not be equivalent or translate across the world due to differences in population demographics 12,13 .Nonetheless, this oversimplification of categories can result in loss of diversity and precision in studies using ethnicity.Incorrect or unrepresentative ethnicity records risk introducing bias in insights drawn from health data and ensuing literature, ultimately contributing to inappropriate healthcare.Use of population-wide routinely-collected data offers an opportunity to study diverse ethnicity groups in detail with sufficient power, enabling health research to become more inclusive 4,10,11 . 
Health inequality was highlighted as a significant issue during the COVID-19 pandemic, when individuals from ethnically diverse backgrounds in otherwise predominantly White populations were disproportionately affected by SARS-CoV-2 11 . However, this is an ongoing and multifaceted challenge; one underlying source is bias in health data and ensuing technologies. Understanding and addressing biases in health data is a fundamental first step in addressing this challenge. To improve the understanding of how ethnicity is recorded, mapped, and used in the UK, we explored ethnicity records for completeness, consistency, and granularity in National Health Service (NHS) England's Secure Data Environment (SDE) service for England (UK), accessed via the BHF Data Science Centre's CVD-COVID-UK/COVID-IMPACT Consortium 14 .

Methods
Data sources and linkages. NHS England maintains an SDE for secure access to anonymised patient-level electronic health records for England, with linkages to primary-, secondary-, and tertiary-care data sources for research purposes 15,16 . NHS England's Master Person Service facilitates the linkage between the SDE data sources through the NHS number (a unique 10-digit healthcare identifier), date of birth, and sex 15,17 .

This study focused on the General Practice Extraction Service (GPES) Data for Pandemic Planning and Research (GDPPR) data source, a primary-care dataset for England that collects information from all individuals who are currently registered with a general practitioner (GP) practice and any individual who died on or after 1st November 2019. GDPPR does not include individuals who died before November 2019 for ethical reasons, as those individuals are considered out of scope for COVID-19 research. It is accessible through the NHS England SDE 18 (formerly NHS Digital TRE 15,16 ). Data include diagnoses, prescriptions, treatments, outcomes, vaccinations, and immunisations 19,20 . GDPPR covers 98% of English GP practices across all relevant GP computer system suppliers (TPP, EMIS, Cegedim (formerly called Vision or In Practice Systems), and Microtest) 15 .

Data sets used. To evaluate and curate the ethnicity data available in the NHS England SDE 15,16 , we used the following three linked datasets:
• Primary care data: the General Practice Extraction Service (GPES) Data for Pandemic Planning and Research (GDPPR) 19,20
• Hospital admissions data: Hospital Episode Statistics for admitted patient care (HES-APC)
• Mortality information from the Office for National Statistics (ONS): Civil Registration of Deaths
GDPPR was used to select the individuals included in the study. It was the main source for ethnicity data and for all variables included in the study except death, which was obtained from the Civil Registration of Deaths. HES-APC was used as a second source of ethnicity data.

Data access. A data sharing agreement issued by NHS England for the CVD-COVID-UK/COVID-IMPACT research programme (ref: DARS-NIC-381078-Y9C5K) enables accredited, approved researchers from institutions party to the agreement to access data held within the NHS England SDE service for England.

Source codes for ethnicity. Ethnicity is recorded in health records using the following medical concepts (Fig. 1):
• SNOMED concepts: The Systematized Nomenclature of Medicine Clinical Terms is a standardised vocabulary for the recording of patients' clinical information in electronic health records. It is used across NHS practices and healthcare providers 21 . We focused on GDPPR records containing SNOMED-CT 22 UK Edition ethnicity concepts. Any mention of SNOMED concepts in this paper refers directly to these codes.
• NHS ethnicity codes: Standard ethnicity categories defined in the NHS England Data Dictionary 23 , using A-Z notation. Ethnicity fields in the NHS tables may use different census classifications; therefore, NHS ethnicity code notation may differ slightly depending on which census it is based on. Table 1 summarises the NHS ethnicity codes available in GDPPR, the corresponding categories in HES-APC, and the 2011 and 2021 UK ONS census categories. Mapping of SNOMED concepts to NHS ethnicity codes was provided by NHS England (Table S1).
An individual's ethnicity may be recorded using either SNOMED-CT concepts or NHS ethnicity codes in GDPPR (primary care records), whereas it may only be recorded using the latter in HES-APC (hospital records).

Other ethnicity classifications. High-level ethnicity groups: Asian/Asian British, Black/African/Caribbean/Black British, Mixed, Other Ethnic Groups, Unknown, and White, based on ONS ethnicity group high-level category descriptions 13,24 . The algorithm used within the SDE to condense NHS ethnicity codes and SNOMED concepts to these classifications is provided in Table S2. Figure 2 shows a representation of the hierarchy between the three classification systems, and the points where the aggregation is performed by the mapping provided by NHS England or by the algorithm used in the SDE.

Settings and participants. We studied all individuals with a unique patient pseudoidentifier in GDPPR 19,20 from 1st Jan 1997 until 23rd April 2022. Individuals with an invalid age (i.e., age <0 or ≥115 years old) or missing sex were excluded.

Covariates. Death date was obtained from the civil registration deaths table, which is curated by the ONS and records primary and secondary causes of death using ICD-10. All additional characteristics of individuals were extracted from GDPPR data, which included: age at date of death or age on 23rd April 2022 (date of data extraction), sex, most recent record of residence (i.e., geographical region) in England, body mass index (BMI), index of multiple deprivation (IMD), current smoking status, current alcohol use status, and the presence of any clinical record of atrial fibrillation, acute myocardial infarction, chronic kidney disease, chronic obstructive pulmonary disease (COPD), heart failure, pulmonary embolism, cancer, dementia, diabetes, hypertension, liver disease, obesity, or stroke diagnosis. Geographical region is reported using England's nine official regions: London, North East, North West, Yorkshire, East Midlands, West Midlands, South East, East, and South West, which were mapped from the Lower Layer Super Output Areas (LSOA).

Statistical analysis. Completeness: missing ethnicity data. To study completeness, individual-level ethnicity data were extracted from GDPPR using SNOMED concepts and/or NHS ethnicity codes, prioritising SNOMED concepts when available. For individuals with missing ethnicity data in GDPPR, we extracted HES-APC-linked ethnicity data using NHS ethnicity codes (Fig. 3).
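The source-prioritisation rule just described (GDPPR SNOMED concept first, then GDPPR NHS ethnicity code, then HES-APC) can be pictured with a small sketch. Everything below is illustrative: the table layout, column names and values are invented for the example and do not reflect the NHS England SDE schema or the published extraction code.

```python
import pandas as pd

# Toy per-person extracts; layout and values are hypothetical.
gdppr_snomed = pd.DataFrame({"person_id": [1, 2], "ethnicity": ["Indian (ethnic group)", "British (ethnic group)"]})
gdppr_nhs    = pd.DataFrame({"person_id": [2, 3], "ethnicity": ["A", "N"]})
hes_apc      = pd.DataFrame({"person_id": [3, 4], "ethnicity": ["N", "H"]})

def pick_ethnicity(person_ids, sources):
    """Walk the sources in priority order and keep the first non-missing ethnicity per person."""
    out = pd.DataFrame({"person_id": person_ids, "ethnicity": pd.NA, "source": pd.NA})
    for name, table in sources:
        lookup = table.drop_duplicates("person_id").set_index("person_id")["ethnicity"]
        still_missing = out["ethnicity"].isna()
        mapped = out.loc[still_missing, "person_id"].map(lookup).dropna()
        out.loc[mapped.index, "ethnicity"] = mapped
        out.loc[mapped.index, "source"] = name
    return out

priority = [("GDPPR SNOMED", gdppr_snomed), ("GDPPR NHS code", gdppr_nhs), ("HES-APC", hes_apc)]
print(pick_ethnicity([1, 2, 3, 4, 5], priority))   # person 5 remains missing in every source
```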
Ethnicity missingness was defined as (i) no record in either GDPPR or HES-APC or (ii) an ethnicity code of "not stated", referring to individuals who were asked but preferred not to state their ethnicity and individuals who may not know what to answer. We compared the clinical characteristics of individuals in GDPPR whose ethnicity data were obtained from GDPPR, obtained from the link to HES-APC, or not recorded.

Inconsistency: multiple records. We studied the presence of multiple ethnicity codes for an individual within the GDPPR records and within the HES-APC records, and we compared the prevalence of individuals with multiple codes in GDPPR and HES-APC. To study this within the GDPPR records, each individual's SNOMED concepts were converted into the corresponding 19 NHS ethnicity code categories. For individuals who had two co-existing NHS ethnicity codes, the frequency of each co-existing pair was determined. If a patient had more than two co-existing ethnicity codes present, a count of one was added for each pairing.

Inconsistency: potential discrepancies between classifications. To study potential discrepancies and misclassifications between the different ethnicity classifications, we studied the mappings between SNOMED and NHS ethnicity codes and between NHS ethnicity codes and high-level ethnicity groups.

Granularity: from high-level categories to SNOMED concepts. Here "granularity" refers to the degree of detail, i.e., sub-groups within an ethnicity group. Definitions of the most recent SNOMED record for GDPPR individuals were explored. Data were prepared using Python V.3.7 and Spark SQL (V.2.4.5) on Databricks Runtime V.6.4 for Machine Learning. Data were analysed using Python in Databricks and RStudio (Professional) Version 1.3.1093.1 driven by R Version 4.0.3.

Results
Completeness of ethnicity data. We identified 61,810,570 individuals with unique identifiers in the GDPPR dataset on 23rd April 2022. We excluded 403 of these individuals for invalid age or missing sex. Of the remaining, 51,135,903 (83.3%) had an ethnicity code recorded, including those whose ethnicity was recorded as Unknown, whereas 10,674,667 (16.7%) did not have any record for ethnicity. The recorded ethnicity groups included White (77.3%), Asian/Asian British (9.8%), Black/Black British (3.6%), Other Ethnic Groups (3.6%), Mixed (2.2%), and Unknown ethnicity (3.2%) (Figure S1). When linked with HES-APC, the proportion of those without any ethnicity record reduced from 16.7% to 6.1% (Fig. 4).
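As an aside, the pair-counting rule described in the multiple-records methods above (each unordered pair of co-existing codes contributes one count per person) is easy to express directly; the records below are fabricated purely to show the mechanics.

```python
from collections import Counter
from itertools import combinations

# Hypothetical sets of distinct NHS ethnicity codes per person (A-Z letters); not real patient data.
codes_per_person = {
    "person_1": {"A", "C"},
    "person_2": {"A", "H", "Z"},
    "person_3": {"N", "P"},
}

pair_counts = Counter()
for codes in codes_per_person.values():
    # every unordered pair of co-existing codes adds a count of one for that person
    for pair in combinations(sorted(codes), 2):
        pair_counts[pair] += 1

print(pair_counts.most_common(3))
```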
Individuals with missing ethnicity data were generally younger, with a median age [IQR] of 35.0 [22.0, 53.0] years vs 42.0 [24.0, 61.0] years for those with ethnicity from GDPPR and 36.0 [18.0, 58.0] years for those with ethnicity linked from HES-APC. A greater proportion of those with missing ethnicity were male (58.6%) than of those with ethnicity from GDPPR (48.9% male) or HES-APC (54.0% male) (Table 2). They also had fewer comorbidities (Table 3), and a greater proportion came from the South East and South West regions of England (Figure S2). Individuals aged 18-29 years had between 5.7% and 9% more missing ethnicity data than any other age group (Table 2).

Assessment of multiple ethnicity records. About 1.4% of individuals with an original NHS ethnicity code record and 16.0% of individuals with a converted SNOMED concept record had multiple different ethnicity codes (Table S3). Excluding the Not stated (Z) code reduced inconsistencies (to 1.2% and 10.3%, respectively). In contrast, 38.0% of individuals in GDPPR with at least one ethnicity record in HES-APC (n = 46,804,958) had multiple inconsistent records, dropping to 19.0% when the Not stated (Z) code was excluded. Ethnicity codes most frequently found in individuals with more than one reported code in GDPPR were British (A), Any other White background (C), Not stated (Z), Any other ethnic group (S), and Any other Asian background (L) (Figure S3). The most common ethnicity code combinations were British (A) - Any other White background (C), British (A) - Not stated (Z), and Any other White background (C) - Any other ethnic group (S). When White ethnicity codes (A, B, and C) were excluded, the most common pairs of minority ethnicity codes were Any other Asian background (L) - Any other ethnic group (S), African (N) - Any other Black background (P), and Indian (H) - Any other Asian background (L) (Figure S4).

Figure S2 maps the distinct levels of ethnicity concepts from the different data sources to one another. SNOMED currently gives the most granular ethnicity records, with 489 SNOMED concepts representing ethnicity within the NHS England SDE (Table S1). However, only 255 (52.1%) of these codes were used at least once in the extracted individuals' records. The remaining 234 (47.8%) codes were not assigned to any individual. Figure S5 shows the five most frequently used SNOMED concepts mapped to each NHS ethnicity code. Table S4 displays the number of individuals per SNOMED concept in GDPPR.

Potential discrepancies and misclassifications. Some inconsistency was found in the aggregation of NHS ethnicity categories into the high-level ethnicity groups. According to the 2011 and 2021 England and Wales census classifications, Gypsy/Irish Traveller (T) falls within the higher-level category White, and Chinese (R) within Asian. In contrast, the NHS ethnicity classification included Gypsy/Irish Traveller (T) and Chinese (R) within the higher-level category Other Ethnic Groups, following the ONS Census 2001 classification (see the classification algorithm used in the CVD-COVID-UK/COVID-IMPACT Consortium in Table S2). Further discrepancies were found in the grouping of SNOMED concepts into NHS ethnicity categories. A mapping algorithm could not be traced. Given the lack of documentation on the mapping of SNOMED concepts to NHS ethnicity codes, several potential discrepancies were observed which should be carefully (re)considered by researchers in future (Fig. 5):
for instance, concepts including a variant of "Black East African Asian/Indo-Caribbean" were assigned to Any other Asian background (L). However, there is no clarification as to whether these concepts are mapped more accurately there than elsewhere, such as Other ethnic group or Black/African/Caribbean/Black British. Likewise, variants of "Black West Indian" were mapped to Caribbean (M), although a proportion of these individuals may have Asian legacy. Arguably, the three 'Mixed' concepts within Any other background (G) may be better grouped in more specific categories, such as "Black - other, mixed" within Black background (P). Several concepts contained by Any other ethnic group (S) could also be placed in more specific categories. For example, the 2001 census category "Asian and Chinese" is linked to Any other ethnic group (S).

Facilitating data reuse in future research. The curation process has organised ethnicity data into a hierarchical mapping. This allows the data source to be reused by future researchers using ethnicity information. The publicly available R code can be used to extract the most up-to-date ethnicity records for future research. This allows the necessary flexibility for using the data in observational research such as retrospective cohort studies, as well as potentially helping with clinical trial selection through services such as NHS DigiTrials.

Discussion
Errors in health care can impact patient care and outcomes as well as increase costs to the care system 25 and affect public trust 26 . Biased ethnicity knowledge could potentially lead to biased healthcare decision-making and to patients receiving inappropriate or no care. Correct identification of ethnicity is an essential first step to understanding inequities between ethnicities. Despite its complexity, researchers should aim to include ethnicity in their analyses. The results presented here can be used to further the use of ethnicity in future research. Among those whose ethnicity was recorded, the proportions of individuals with White, Black/Black British and Mixed ethnicity were 3.7%, 0.6% and 0.7% lower, respectively, whilst those with Asian/Asian British and Other Ethnic groups were 0.2% and 1.4% higher, compared with 2021 census estimates 27 . Therefore we consider GDPPR a representative data source for the England population.

Completeness of ethnicity records. We found that over 83.3% of individuals in England's primary care system had at least one ethnicity recorded, increasing to 93.9% when linked to hospitalisation records. This result represents a greater level of completeness than reported in other routinely collected GP records 28 , and highlights the usefulness of linking data across primary and secondary care to maximise ethnicity data completeness. Individuals with missing ethnicity were younger, more likely to be male and to live in the southern regions of England, and had fewer comorbidities than individuals with recorded ethnicity. It may be speculated that this group is representative of generally healthy individuals or those otherwise not inclined to seek healthcare. In other words, most individuals with Unknown ethnicity might not use the health care system very often, which decreases the probability of data being recorded. Similar results have been reported in other UK data sources. For instance, Mathur et al. observed higher rates of ethnicity recording for individuals aged 40 to 79 years in the Clinical Practice Research Datalink (CPRD) and HES data sources than for older or younger individuals 8 . Petersen et al.
found that, among people aged 18-65 years, men were less likely to have health indicators recorded than women in the Health Improvement Network (THIN) 29 .

Table 2. Comparison of individuals with and without an ethnicity record in GDPPR or linked from HES-APC: general characteristics. *Excluding individuals who refused to state their ethnicity (NHS ethnicity code was 'Z'). **Group composed of individuals who refused to state their ethnicity (NHS ethnicity code was 'Z') and those whose ethnicity was not recorded. ***Geographic regions reported in the table belong to the nine official regions of England. Abbreviations: IMD, index of multiple deprivation; NHS-APC, National Health Service for admitted patient care in the UK; SD, standard deviation.

such as Bangladeshi in the UK and Hispanic/Latinos in the US 30 . This work describes for the first time more than 250 patient-identified ethnicity sub-groups in England. Ethnicity data in healthcare and national statistics are captured for different purposes, partly explaining some of the differences among NHS and census ethnicity categories. For instance, GDPPR is directly intended for patient care; HES has a more administrative nature linked to payments; while the ONS collects information from the UK population, including ethnicity, in a census held approximately every 10 years, most recently in 2021 7,13 . To allow for the emergence of new ethnicity groups, the census questionnaire allows free-text answers 13 . After pooling all information, the ONS reports the groups that, in their understanding, best represent the existing diversity in the UK. Updated ethnicity groups are then shared with the NHS, which uses this information to update the ethnicity categories used in their data sources.

The 2011 Census published by the ONS was the gold standard for ethnicity recording in England and Wales until the recent publication of the 2021 Census 31 . However, not all NHS sources base their categories on the same census. For example, HES-APC uses 2001 Census categories, whereas GDPPR uses 2011 census categories. This discrepancy creates uncertainties and data mismatching between different datasets. For example, HES-APC does not include the ethnicity categories Arab (W) and Gypsy/Irish Traveller (T) 32 . This highlights once more the importance of linking data across primary and secondary care, in this case to maximise ethnicity data granularity.

Multiple records and potential discrepancies. Our analysis of multiple ethnicity records within the GDPPR and HES-APC sources showed relatively similar discrepancy rates when the code for "do not know/refusal" (i.e., Not stated (Z)) was excluded (12% and 19%, respectively). However, GDPPR ethnicity data should be prioritised to ensure inclusion of the Arab (W) and Gypsy/Irish Traveller (T) ethnicities, as well as to reduce the inclusion of older notations such as the "Codes from 1995-1996 to 2000-2001". Using the prioritisation algorithm from our analysis, the impact of these legacy classifications represented less than 0.13% of individuals with an ethnicity record and less than 0.12% of all individuals registered in GDPPR. Nonetheless, we consider that using an old classification system is preferable to registering the ethnicity as missing, but researchers may decide differently on a project basis.
Within patient records, the most frequently coexisting codes would be placed in the same higher-order group. For example, codes for African (N) and Any other Black background (P) often appear for the same individual and would both be grouped within the high-level category Black/African/Caribbean/Black British. The use of higher-level groupings can therefore resolve some conflicting cases by reducing granularity. However, it cannot resolve conflicts where different Mixed categories coexist in the same record, such as Any Other Mixed Background (G) occurring alongside British (A), Any Other White Background (C), or African (N). Higher-level Mixed groupings may therefore include more ambiguous ethnicity concepts. The British (A) code had frequent conflicting pairings with Indian (H), Any other Asian background (L), and Caribbean (M), suggesting inconsistencies in individuals' perceptions of their nationality and ethnicity when self-reporting ethnicity.

The grouping algorithm used can also be a source of inconsistencies. For instance, including Chinese (R) and Gypsy/Irish Traveller (T) within Other Ethnic Groups instead of the established high-level ethnicity groups might be preferred for certain studies, but should not be the default.

Uncertainty regarding the mapping of international SNOMED-CT ethnicity concepts to NHS ethnicity codes highlighted the need for better documentation of the underlying processes. SNOMED concepts available in GDPPR data account for different, more granular ethnicity groups than NHS ethnicity codes, enabling greater diversity in ethnicity groups to be represented. The descriptions of the observed SNOMED concepts included ethnicity, race, religion, and geographic location, among others. However, many concepts require some aggregation due to their limited use within very large datasets, such as the one explored here. Most research based on NHS data uses wider categories, rather than the highly specific concepts captured by SNOMED. The large variety and complexity of ethnicity codes can make collapsing and comparing codes difficult, regardless of whether NHS ethnicity codes or high-level ethnicity groups are used. Although using these more general groupings allows researchers to achieve a minimum sample size while protecting individual identities, the cost is uncertainty about how accurate these bigger groups are. In addition to improving mapping quality, better documentation of how the observed SNOMED concepts were defined could help in the comparison of these groups with classifications from other countries.

Strengths and limitations of this study. To our knowledge, this work is the first attempt to curate and describe the full breadth and depth of patient self-identified ethnicity using more than 250 ethnicities among over 61 million individuals in England.
GDPPR is a collection of de-identified person-level primary care data (linked to secondary and tertiary care) for one of the world's largest research-ready population-wide electronic health records databases, housed within a trusted research environment, NHS England's SDE service for England. This extensive observational dataset, with its large number of ethnicity groups and sub-groups, has the potential to deepen our understanding of ethnicity in health data and to improve real-world evidence generation.

Despite the exclusion of individuals who died before 1st November 2019, GDPPR can provide a reliable picture of the existing ethnic diversity for studies, like ours, that include individuals registered in the England primary care system after November 2019. We considered GDPPR a representative data source for the diversity of the England population when compared to the UK 2021 census 27 . The slight variations that may be observed, with the 1.4% higher representation of Other Ethnic groups being the largest difference, may be explained by our decision not to restrict the analysis to living individuals only, as most researchers would in their research (in other words, we include all individuals registered in GDPPR who met the inclusion criteria, including those who died between November 2019 and April 2022). However, studies aiming to analyse the diversity of the population before this date may be biased and, therefore, not representative.

This study provides a first, detailed curation of ethnicity data for re/use in research. The observed findings are highly representative of the England population: in England, there were a total of 6,700 GP practices containing 60,389,925 unique NHS identifiers from patients who were not deceased by 24 August 2020 34 . Of these, 6,535 GP practices containing 56,441,600 unique identifiers were included within GDPPR. Nevertheless, we do not disregard the possibility that patients registered at multiple practices with different identifiers could have been counted more than once. However, the impact of this is reduced by the Master Person Service algorithm, which increases the quality of the data by matching and linking person-records within and across the different NHS sources 17 . In other words, the algorithm links the different NHS identifiers from the same single patient not only within GDPPR but also across other linked datasets such as the HES-APC tables, and assigns a unique anonymised identifier (named Person_ID within the NHS England SDE) that is later used by the SDE user. Further studies are required to assess its accuracy in GDPPR records.
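The de-duplication point above can be illustrated with a toy example of resolving several source identifiers to one person before counting. The matching rules of the Master Person Service are not reproduced here; the mapping table, column names and values below are hypothetical.

```python
import pandas as pd

# Hypothetical mapping of source-specific identifiers to a single anonymised person identifier.
id_map = pd.DataFrame({
    "source_id": ["gp-001", "gp-002", "hosp-77"],
    "person_id": ["P1", "P1", "P1"],           # the same individual seen under three identifiers
})
records = pd.DataFrame({
    "source_id": ["gp-001", "gp-002", "hosp-77"],
    "ethnicity": ["Indian", "Indian", "H"],
})

resolved = records.merge(id_map, on="source_id")
# Counting on person_id rather than source_id avoids counting the same patient more than once.
print(resolved["source_id"].nunique(), "source identifiers ->", resolved["person_id"].nunique(), "unique person")
```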
Ethnic diversity is better captured in SNOMED concepts than in other existing classifications. Future observational research with study-specific sample sizes may need to consider combining smaller ethnicity categories into larger groups for study feasibility. The detailed representation of the England population described here also means the observed ethnicity groups may not be equivalent or transportable to other countries. Additionally, there is no perfect solution for conflicting codes in the same individual, especially for codes that cannot be reconciled (e.g., White, Black). Of the different available approaches, we used the most recent SNOMED concept in an individual's record when exploring granularity in GDPPR. This approach may have affected the prevalence of very small minority groups, including the 234 codes that were not linked to any individual. An alternative approach would have been to select the most frequently recorded ethnicity category, which could reduce any potential human error when entering the data into the electronic health system. However, using the most recent codes has the advantage of including any new ethnicity definitions within SNOMED, allowing us to observe a more up-to-date representation of the population's self-perception of ethnicity.

Whilst improvements to data collection at source would be welcome, much more can be done with currently available ethnicity data than is typically seen in the literature. This is more important now than ever, as routinely collected ethnicity data are increasingly used in the era of real-world analytics and large-scale trials. For instance, improvements required in ethnicity mapping between classifications were identified in this paper. Additionally, this study demonstrates the importance of linking data across primary and secondary care to maximise the ascertainment, completeness, and granularity of ethnicity data, and the value of better ethnicity coding in big health data.

The details of using ethnicity provided in this paper may not only help researchers to improve the representation of population diversity in their research, but can also be used to deliver much more personalised medicine, such as tailoring prognostic models to the 19 ethnicity groups. Accurate ethnicity data will lead to a better understanding of individual diversity, which will help to address disparities and influence policy recommendations that can translate into better, fairer health for all. This, in turn, shows that the effort of collecting ethnicity and using it in research is more than worthwhile.

made available to accredited researchers. Those wishing to gain access to the data should contact bhfdsc@hdruk.ac.uk in the first instance.

Fig. 1 How ethnicity is collected in the UK and typically used for research. The A-Z letters are the nomenclature observed in the data to represent the NHS ethnicity codes. Abbreviations: High-level ethnicity groups, general ethnicity classification groups from the Office for National Statistics commonly used in research; NHS, National Health Service in the UK; SNOMED, SNOMED-CT records containing ethnicity concepts.

Fig. 2 Visual representation of the hierarchy between the three ethnicity classifications, from the broadest to the most specific: High-level ethnicity groups, NHS ethnicity codes and SNOMED concepts. The A-Z letters are the nomenclature observed in the data to represent the NHS ethnicity codes. The colours displayed for the High-level ethnicity groups show how the NHS ethnicity concepts and SNOMED-CT concepts can be aggregated into this 6-category classification. The 1 highlights the different colour of the letters C and T with respect to the colours of their concepts, Chinese and Gypsy/Irish Traveller, respectively. The colours of the concepts represent the current aggregation algorithm available in the NHS England SDE, whilst the colours of the letters show the aggregation suggested by the UK Office for National Statistics. Abbreviations: *, the Unknown category is not always included; NHS, National Health Service in the UK; SNOMED, SNOMED-CT records containing ethnicity codes; SDE, Secure Data Environment.

Fig. 3 Decision tree of preferred source of ethnicity. Solid arrows mark the preferred option whilst dashed arrows indicate the alternative route. Abbreviations: GDPPR, General Practice Extraction Service (GPES) Data for Pandemic Planning and Research; HES-APC, Hospital Episode Statistics for admitted patient care; SNOMED, SNOMED-CT records containing ethnicity codes.

Fig. 4 Flow chart of availability of ethnicity records for individuals present in GDPPR. Abbreviations: GDPPR, General Practice Extraction Service (GPES) Data for Pandemic Planning and Research; HES-APC, Hospital Episode Statistics for admitted patient care; NHS, National Health Service in the UK; SNOMED, SNOMED-CT records containing ethnicity concepts; NA, ethnicity not available.

Table 3. Comparison of individuals with and without an ethnicity record in GDPPR or linked from HES-APC: clinical diagnostics. *Excluding individuals who refused to state their ethnicity (NHS ethnicity code was 'Z'). **Group composed of individuals who refused to state their ethnicity (NHS ethnicity code was 'Z') and those whose ethnicity was not recorded. ***Geographic regions reported in the table belong to the nine official regions of England. Abbreviations: COPD, chronic obstructive pulmonary disease; NHS-APC, National Health Service for admitted patient care in the UK.

Fig. 5 Sankey plot showing potential discrepancies between the SNOMED concept and NHS ethnicity code mapping. Abbreviations: NHS, National Health Service in the UK; SNOMED, SNOMED-CT records containing ethnicity concepts.

Table 1. NHS ethnicity codes for Ethnicity (A-Z) available in GDPPR and corresponding categories in Census (2021), Census (2011), Hospital Episode Statistics, and notation before 2001. The Hospital Episode Statistics for admitted patient care (HES-APC) data uses census notation based on the 2001 Census. 'Z: not stated' indicates that the person was asked and either refused to provide this information or was genuinely unable to choose a response. 'X: Not known' indicates that the person was not asked or was not in a condition to be asked (e.g., unconscious). Abbreviations: NHS, National Health Service.
v3-fos-license
2018-04-03T02:39:32.013Z
2016-02-19T00:00:00.000
10992522
{ "extfieldsofstudy": [ "Biology", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.nature.com/articles/srep21805.pdf", "pdf_hash": "f2b00fce8cf5d806ffea9a4d2a7b2aabe38a8e2e", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41630", "s2fieldsofstudy": [ "Agricultural And Food Sciences", "Environmental Science" ], "sha1": "f2b00fce8cf5d806ffea9a4d2a7b2aabe38a8e2e", "year": 2016 }
pes2o/s2orc
Can arbuscular mycorrhizal fungi reduce Cd uptake and alleviate Cd toxicity of Lonicera japonica grown in Cd-added soils? A greenhouse pot experiment was conducted to study the impact of arbuscular mycorrhizal fungi−Glomus versiforme (Gv) and Rhizophagus intraradices (Ri) on the growth, Cd uptake, antioxidant indices [glutathione reductase (GR), ascorbate peroxidase (APX), superoxide dismutase (SOD), catalase (CAT), ascorbate (ASA), glutathione (GSH) and malonaldehyde (MDA)] and phytochelatins (PCs) production of Lonicera japonica in Cd-amended soils. Gv and Ri significantly increased P acquisition, biomass of shoots and roots at all Cd treatments. Gv significantly decreased Cd concentrations in shoots and roots, and Ri also obviously reduced Cd concentrations in shoots but increased Cd concentrations in roots. Meanwhile, activities of CAT, APX and GR, and contents of ASA and PCs were remarkably higher in Gv/Ri-inoculated plants than those of uninoculated plants, but lower MDA and GSH contents in Gv/Ri-inoculated plants were found. In conclusion, Gv and Ri symbiosis alleviated Cd toxicity of L. japonica through the decline of shoot Cd concentrations and the improvement of P nutrition, PCs content and activities of GR, CAT, APX in inoculated plants, and then improved plant growth. The decrease of shoot Cd concentrations in L. japonica inoculated with Gv/Ri would provide a clue for safe production of this plant from Cd-contaminated soils. However, the overall mechanisms by which AMF alleviate HM phytotoxicity have still been not completely understood, with controversial outcomes depending on the interactions of specific plant, fungus and HM species. Lonicera japonica Thunb., a medicinal and an ornamental plant for vertical gardening, has been widely planted in temperate and tropical regions in the past 150 years 23 . It possesses many characteristics, such as high biomass, deep root, easy cultivation, wide geographic distribution and strong resistance to environmental stress 24 . Recently, Liu et al. and Jia et al. have found that L. japonica had a strong capability in Cd accumulation, which would bring out a threat to safe production of this plant [25][26][27] . So far, no information has been available on the role of AMF in Cd uptake and Cd toxicity relief in L. japonica. In this study, we explored whether AMF-Glomus versiforme (Gv) and Rhizophagus intraradices (Ri) could reduce Cd uptake and alleviate Cd phytotoxicity in L. japonica planted in Cd-amended soils (0, 10 and 20 μ g Cd g −1 ), and further provided an enlightenment to the mechanism of Cd toxicity relief in mycorrhizal plant through determining plants biomass, Cd concentration, antioxidant activities and PCs production in plants. Results Two-way ANOVA with AMF inoculation, Cd addition and their interaction were shown in Table 1. Each of the two factors separately generated significant differences in all variables, with the exception of SOD activity and soil DTPA-Cd concentration (AMF inoculation) and shoot biomass (Cd addition). Moreover, the Cd × AMF interaction generated significant changes in both shoot and root Cd concentrations and PCs content, and also resulted in evident changes in all antioxidative parameters, with the exception of SOD activity and GSH content (Table 1). Mycorrhizal colonization rate. Mycorrhizal colonization of L. japonica was observed in the all inoculation groups (Fig. 1). Mycorrhizal colonization rates of L. japonica were quite high, from 91% to 96% for Gv and from 89% to 96% for Ri, respectively. 
Compared with the Cd-unadded soil, the mycorrhizal colonization rate was hardly affected by Cd addition. Moreover, hyphae or vesicles were not found in the uninoculated controls.

Plant growth and P acquisition. The positive effect of both Gv and Ri inoculation on the dry weight and P acquisition of the shoots and roots of L. japonica at all Cd levels is shown in Fig. 2. The biomass of L. japonica inoculated with AMF was significantly (P < 0.05) elevated in the soils added with 0, 10 and 20 μg Cd g −1 , with increases of 444%, 248% and 163% for Gv and 625%, 176% and 212% for Ri in the shoots (Fig. 2A), and 598%, 425% and 186% for Gv and 648%, 301% and 206% for Ri in the roots (Fig. 2C), respectively, compared with the uninoculated control. Similarly, evident increases (P < 0.05) of P concentrations in mycorrhizal L. japonica in the 0, 10 and 20 μg Cd g −1 soils were also observed, of 11%, 15% and 8% for Gv and 13%, 7% and 12% for Ri in the shoots (Fig. 2B), and 12%, 10% and 8% for Gv and 15%, 15% and 14% for Ri in the roots (Fig. 2D), respectively.

Plant Cd concentrations and soil DTPA-extractable Cd. AMF inoculation significantly (P < 0.05) influenced Cd concentrations in the shoots and roots of L. japonica (Fig. 3). Compared with the uninoculated groups, Cd concentrations in plants inoculated with Gv in the 10 and 20 μg Cd g −1 soils were markedly (P < 0.05) reduced, by 47% and 76% in the shoots (Fig. 3A) and 32% and 49% in the roots (Fig. 3B), respectively. Furthermore, Ri inoculation evidently (P < 0.05) reduced Cd concentrations in the shoots, with reductions of 69% and 54% (Fig. 3A), but obviously (P < 0.05) increased Cd concentrations in the roots, with increases of 62% and 28%, respectively.

Values are presented as means ± SD for the five replicates. An asterisk (*) within each arbuscular mycorrhizal fungus denotes a significant difference between Cd-added and Cd-unadded soils according to the Tukey test at the 5% level. Values are presented as means ± SD for the five replicates. An asterisk (*) within each Cd concentration denotes a significant difference between inoculation and uninoculation treatments according to the Tukey test at the 5% level.

Activities of CAT, APX and GR in inoculated plants were significantly (P < 0.05) higher than those in uninoculated plants at all Cd levels, with increases from 65% to 146% for Gv and 107% to 184% for Ri in CAT (Fig. 4B), 36% to 134% for Gv and 108% to 305% for Ri in APX (Fig. 4C), and 39% to 278% for Gv and 74% to 124% for Ri in GR (Fig. 4D), respectively. However, both Gv and Ri colonization had no impact on SOD activities at any Cd level (Fig. 4A). Acting as antioxidants, the ASA contents in mycorrhizal plants were obviously (P < 0.05) increased, with increases from 53% to 110% for Gv and 66% to 128% for Ri (Fig. 5A), but the GSH contents in mycorrhizal plants were evidently (P < 0.05) decreased, with decreases from 13% to 16% for Gv and 10% to 21% for Ri, respectively (Fig. 5B), compared with non-mycorrhizal plants at all tested Cd levels. In addition, as an indicator of lipid peroxidation, the MDA contents in inoculated plants showed a pronounced (P < 0.05) decrease at all Cd levels, with decreases from 20% to 30% for Gv and 11% to 24% for Ri (Fig. 5C), compared with uninoculated controls.

Phytochelatins. The PCs contents in L. japonica with and without AMF are shown in Fig. 5D. The presence of AMF obviously (P < 0.05) increased PCs production in mycorrhizal plants at all Cd levels, with increases from 11% to 29% for Gv and 29% to 71% for Ri, respectively, compared with non-inoculated plants.
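For readers who want to reproduce the style of factorial analysis summarised in Table 1 (two factors plus their interaction), a minimal sketch with statsmodels is given below. The data frame is filled with made-up numbers purely to show the call pattern; it is not the study's dataset.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Made-up example: 2 inoculation levels x 3 Cd levels, three replicates each (not the study's data).
data = pd.DataFrame({
    "amf": ["none", "Gv"] * 9,
    "cd": ["0", "0", "10", "10", "20", "20"] * 3,
    "shoot_biomass": [1.0, 5.4, 0.9, 3.1, 0.8, 2.1,
                      1.1, 5.6, 1.0, 3.3, 0.7, 2.0,
                      0.9, 5.2, 0.8, 3.0, 0.9, 2.2],
})

model = ols("shoot_biomass ~ C(amf) * C(cd)", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))   # main effects of AMF and Cd plus their interaction
```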
Discussion Previous studies have indicated that Cd addition to the soil does not inhibit the formation of external hyphae and mycorrhizal colonization. For example, Chen et al. found that soil Cd contamination (0 to 100 μg g−1) did not affect Funneliformis mosseae colonization of Zea mays 18, and Jiang et al. also found that F. mosseae colonization of Solanum nigrum remained unaffected in 0-40 μg Cd g−1 soil 28, which is in accord with our present outcomes. The present results showed that both Gv and Ri are highly tolerant of Cd and can associate well with L. japonica. Cd stress can inhibit root growth and nutrient absorption, especially of P, and thus impairs growth of the whole plant 29,30. However, AMF may improve nutritional status and plant growth through the large absorptive surface area of their hyphae 31. In the present work, P absorption and biomass were markedly increased in the shoot and root of L. japonica by both Gv and Ri inoculation. These results are similar to previous findings 10,32,33, in which AMF colonization promoted P acquisition and plant growth. Some studies have reported that AMF can immobilize HMs in the mycorrhizosphere and inhibit their translocation to the shoots. For instance, Bissonnette et al. reported that Rhizophagus intraradices inoculation of Salix viminalis reduced Cd concentrations in the shoots but increased Cd concentrations in the roots 34. Similarly, our previous study also indicated that Cd concentrations in the shoots of Solanum photeinocarpum were decreased, but significantly increased in the roots, by Glomus versiforme colonization 21. Moreover, Wu et al. found that mycorrhizal inoculation remarkably decreased As concentrations in the husk, straw and root of upland rice grown in As-added soil (70 μg As g−1 soil) 14. In the present study, Gv inoculation significantly reduced Cd concentrations in the roots and shoots, and Ri likewise evidently reduced Cd concentrations in the shoots but increased Cd concentrations in the roots of L. japonica. The reduced shoot Cd concentration in mycorrhizal L. japonica can be explained by two possible mechanisms: (1) mycorrhizal hyphae can serve as a Cd pool that prevents Cd translocation to the shoots by adsorbing and binding Cd 35,36, and (2) "dilution effects" linked to an increased plant biomass and a decreased Cd allocation to above-ground tissues 37,38. In short, both Gv and Ri colonization significantly reduced Cd concentrations in the shoots of L. japonica, which points toward safe production of this plant on Cd-contaminated soils. SOD is involved in converting superoxide to H2O2, and CAT, POD and APX are mainly responsible for the breakdown of H2O2 to H2O and O2. Liu et al. reported that the activities of SOD, POD and CAT in marigold inoculated with R. intraradices were higher than those of uninoculated plants under Cd stress 39. Similarly, Garg and Aggarwal observed that Glomus mosseae colonization significantly increased the activities of SOD, CAT and POD in Cajanus cajan grown in Cd- and/or Pb-contaminated soils 40. Our previous experiments also indicated that Solanum photeinocarpum inoculated with G. versiforme and S. nigrum inoculated with F. mosseae had higher activities of APX, POD and CAT than uninoculated plants in Cd-added soils 21,28. In the present study, the enhancement of CAT and APX activities in mycorrhizal plants suggests that both Gv and Ri colonization helped L. japonica to alleviate oxidative stress.
In antioxidative metabolism, GSH, ASA and GR play an important role in removing H2O2 via the ascorbate-glutathione pathway 41. The present study showed that the GR activity of mycorrhizal plants was increased at all Cd levels compared with non-mycorrhizal plants, which is in line with previous findings. For example, Garg and Kaur reported that GR activity increased in AMF-inoculated C. cajan under Cd and/or Zn stress 8. Garg and Aggarwal also observed that G. mosseae inoculation evidently enhanced GR activity in C. cajan grown in Cd- and/or Pb-contaminated soils 40. Moreover, the contents of GSH and ASA in L. japonica were affected by Gv/Ri symbiosis. These results indicate that both Gv and Ri inoculation influenced the ascorbate-glutathione pathway in L. japonica. Lipid peroxidation is initiated by oxidative stress, and high MDA accumulation indicates severe lipid peroxidation. In this work, the MDA contents in inoculated plants were markedly reduced compared with uninoculated plants at all Cd levels, which further shows that both Gv and Ri inoculation reduced the Cd-induced oxidative stress in L. japonica. PCs are cysteine-rich peptides synthesized from GSH in the presence of metal ions, and they are involved in metal detoxification 42. As far as we are aware, there are few reports on the impact of AMF on PCs production under HM stress, and the results differ between studies. For example, Garg and Kaur found that G. mosseae colonization evidently increased PCs contents in C. cajan under Cd and/or Zn stress 8. However, our previous study indicated that PCs synthesis in S. nigrum grown in different Cd-amended soils was not affected by F. mosseae colonization 28. In the present study, the increase in PCs contents in mycorrhizal plants suggests that Gv/Ri-inoculated L. japonica may be more effective in alleviating Cd toxicity. In addition, the present study showed that the GSH contents in inoculated plants were reduced compared with uninoculated plants, which might be chiefly attributed to the enhanced PCs synthesis in mycorrhizal plants, since PCs are synthesized from GSH. Conclusions In our present study, the impacts of both Gv and Ri symbiosis on Cd uptake and some physiological parameters of L. japonica planted in Cd-amended soils were investigated, and the conclusions are as follows: Firstly, both Gv and Ri inoculation greatly improved plant growth by increasing P acquisition. Secondly, Gv inoculation significantly reduced Cd concentrations in the roots and shoots, and Ri presence also significantly reduced Cd concentrations in the shoots but increased Cd concentrations in the roots of L. japonica. The decrease in Cd concentrations in the shoots of Gv/Ri-inoculated L. japonica points toward safe production of this plant on Cd-contaminated soils. Finally, the activities of CAT, APX and GR, PCs production and ASA contents in inoculated plants were higher than those in uninoculated plants, whereas lower GSH and MDA contents were measured in the inoculated plants. To further explore the mechanisms by which AMF relieve Cd toxicity, future work will turn to molecular and proteomic analyses of AMF-inoculated L. japonica planted in Cd-contaminated soil. Methods Materials preparation. The loamy soils used in the experiment were as described by Liu et al. 43, with the following characteristics: pH 6.85 (1:1 w/v water), organic content 1.65%, available P 52 μg g−1, total Cd 0.12 μg g−1 and DTPA-extractable Cd 0.063 μg g−1.
The soil was sieved through a 2 mm mesh and autoclaved (121 °C, 2 h) for sterilization. Before use, the sterile soil was divided into three aliquots amended with 0 (control), 10 or 20 μg Cd g−1 soil (supplied as CdCl2), respectively. The Cd-added soil was then equilibrated by saturation with aseptic water for one month and air-dried for one month in a controlled greenhouse at 28/22 °C with a 14/10 h day/night cycle. Glomus versiforme (Gv) and Rhizophagus intraradices (Ri), obtained from the Beijing Academy of Agriculture and Forestry, China, were propagated with Zea mays as the host plant grown in 2-L pots containing a 1:1 (v/v) mixture of soil and sand. After five months, the roots were cut into pieces and evenly mixed with the culture medium, including rhizosphere soil, hyphae and spores, and all of these mixtures were used as AMF inocula. Pot experiment. There were three Cd levels (0, 10, 20 μg Cd g−1 soil) and three AMF inoculation treatments (with Ri, with Gv and without AMF) in a fully randomized design with five replicates per treatment, for a total of 45 experimental units. The soil (2.1 kg) mentioned above was loaded into each pot (height 14 cm, bottom diameter 13 cm and top diameter 16 cm). Inoculated treatments were established by mixing 85 g of mycorrhizal inoculum into each pot. Each pot of the non-mycorrhizal treatments received the same amount of autoclaved inoculum (121 °C for 2 h) together with a 30-ml aliquot of a filtrate (11 μm) of the AM inoculum, to add the microbial population free of AM propagules 44. The seeds of L. japonica were sterilized with 10% NaClO for 10 min, washed with sterile water, and then germinated on sterilized sand in a light-controlled incubator at 20 °C with a 16/8 h day/night regime. Four uniform seedlings were transplanted to each pot and grown in a controlled greenhouse at 28/22 °C with a 14/10 h day/night cycle, with the soil kept at 60% of its water holding capacity. Water loss was compensated with sterile water every day after weighing the pots. Sampling. After 4 months, all plants of each pot were harvested and separated into roots and shoots. Fresh leaves were lyophilized and kept in vacuum desiccators for the physiological measurements. Roots were immersed in 0.01 M ethylene diamine tetraacetic acid (EDTA) for 30 min and then washed with deionized water to remove metal ions from the root surface 45. In addition, the rhizosphere soils were sampled for further analysis. Mycorrhizal colonization. Cuttings of cleaned roots (1 cm) were softened in 10% KOH (w/w) for 30 min in a 90 °C water bath, bleached in 10% H2O2 for 30 min and acidified in 1% HCl for 3 min at 24 °C. Subsequently, roots were stained with 0.05% Trypan Blue (w/w) at 90 °C for 30 min and kept in lactic acid-glycerol solution (v/v 1:1) 46. Forty pieces of fine roots collected from each pot were analysed, and the AMF colonization rate was calculated according to the grid-line intersect method of Giovannetti and Mosse 47. Plant and soil analysis. The shoots and roots were weighed after drying at 80 °C for 3 days. The dried samples were ground and digested in a tri-acid mixture (5:1:1 HNO3:H2SO4:HClO4) at 225 °C, and Cd and P concentrations were then determined by atomic absorption spectrophotometry (AAS) (Z-2000, Hitachi, Japan) and molybdenum-ascorbic acid spectrophotometry 48, respectively. DTPA-extractable Cd concentrations in rhizosphere soils were measured using the methods described by Zan et al. 49.
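As a rough check on the Cd amendment, the sketch below estimates how much CdCl2 is needed to bring one 2.1 kg pot of soil to the nominal 10 or 20 μg Cd g−1 levels. The molar masses are standard values, and the resulting salt masses are illustrative estimates rather than quantities stated in the paper.

```python
# Minimal sketch of the Cd amendment arithmetic (illustrative, not from the paper).
M_CD = 112.41                 # g/mol, Cd
M_CDCL2 = M_CD + 2 * 35.45    # g/mol, CdCl2 (~183.3)

def cdcl2_mass_mg(soil_kg: float, target_ug_per_g: float) -> float:
    """Mass of CdCl2 (mg) needed to reach the target Cd level in the soil."""
    cd_needed_mg = soil_kg * target_ug_per_g   # ug Cd per g soil == mg Cd per kg soil
    return cd_needed_mg * (M_CDCL2 / M_CD)

for target in (10, 20):
    print(f"{target} ug Cd/g in a 2.1 kg pot -> ~{cdcl2_mass_mg(2.1, target):.0f} mg CdCl2")
# With these assumptions: roughly 34 mg and 68 mg CdCl2 per pot, respectively.
```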
The SOD activity was determined based on the ability of SOD to inhibit the reduction of nitroblue tetrazolium (NBT) by the superoxide (O2·−) radical 50. The APX activity was measured as the decrease in absorbance at 290 nm, based on the ability of APX to catalyze the oxidation of ascorbate 51. The GR activity was measured by the decrease in absorbance at 340 nm due to NADPH oxidation 52. The CAT activity was assayed on the basis of the consumption rate of H2O2 at 240 nm 53. ASA was measured according to the method of Law et al. using dipyridyl as the substrate 54. MDA was determined by the thiobarbituric acid (TBA) reaction, following the method described by Heath and Packer 55. GSH content was estimated by o-phthalaldehyde (OPA) fluorescence derivatization 56. PCs content was calculated as the difference between non-protein thiols (NPT) and GSH 57. NPT content was assessed by the method of Ellman using 5,5′-dithiobis-(2-nitrobenzoic acid) (DTNB) as the substrate 58. Statistical analysis. Data presented are means of five replicates, and appropriate transformations were applied to the data prior to analysis to decrease the heterogeneity of variance. The effects of mycorrhizal inoculation, Cd addition level and their interaction on the measured variables were assessed by two-way analysis of variance (ANOVA) at p < 0.05, 0.01 or 0.001. Means were compared using the Tukey test at p < 0.05. In all cases, statistical analyses were performed using SPSS 17.0 (SPSS, Inc., Chicago, IL, USA).
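The analysis described above was run in SPSS; a minimal re-implementation of the same two-way ANOVA and Tukey comparison is sketched below in Python using statsmodels. The data frame, file name and column names (amf, cd, shoot_biomass) are hypothetical assumptions about the data layout, not part of the original study.

```python
# Minimal sketch of the statistical workflow (hypothetical data layout, not the study's data).
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Assumed tidy layout: one row per pot, factors 'amf' (None/Gv/Ri) and 'cd' (0/10/20),
# and a response column such as shoot dry weight.
df = pd.read_csv("pot_data.csv")   # hypothetical file name

# Two-way ANOVA with interaction, mirroring the "AMF x Cd" terms of Table 1.
model = smf.ols("shoot_biomass ~ C(amf) * C(cd)", data=df).fit()
print(anova_lm(model, typ=2))

# Tukey test at the 5% level, here comparing AMF treatments within one Cd level.
subset = df[df["cd"] == 10]
print(pairwise_tukeyhsd(subset["shoot_biomass"], subset["amf"], alpha=0.05))
```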
Examining Wireless Networks Encryption by Simulation of Attacks The widespread use of wireless LANs is attributable to a combination of factors, including simple construction, employee convenience, convenient connection selection, and the ability to support continual movement, from residences to large corporate networks. For organizations, however, the availability of wireless LAN means an increased danger of cyberattacks and challenges, according to IT professionals and security specialists. This paper examines many of the security concerns and vulnerabilities associated with the IEEE 802.11 Wireless LAN encryption standard, as well as typical cyber threats and attacks affecting wireless LAN systems in homes and organizations, and provides general guidance and suggestions for home and business users. Introduction WLAN is the most widely recognized wireless broadband technology capable of high transmission rates; Wi-Fi allows users to access the Internet from anywhere without using cables. The OMNeT++ tool is used to coordinate the operation of a group of Access Points (APs) [11], each supporting a distinct WLAN technology standard, which are deployed to provide a variety of applications for multiple WLAN standards such as 802.11b, infrared, and 802.11 frequency hopping [10]. We discuss various metrics, such as WLAN load, WLAN delay, WLAN throughput, media latency, TCP churn, and queue size, through simulation. Wi-Fi stands for "Wireless Fidelity." Wi-Fi is an alias for the IEEE 802.11 Wireless Local Area Network (WLAN), a technology that allows electronic devices to connect to a wireless network, particularly those that adopt the 2.4 GHz and 5 GHz radio bands. Wi-Fi is a WLAN communication technology that is segmented into various IEEE 802.11 standards and described by extensions. The 802.11 standard describes numerous physical layers and characteristics of Wi-Fi technologies. VHT (Very High Throughput) is the most recent new physical layer, and it is described in an upgrade to the IEEE 802.11ac standard. Modulation, coding schemes, and related signaling tasks are handled by the PHY layer [1]. The 802.11 Wireless LAN has evolved and altered the entire network landscape in recent years. Ethernet is being phased out in favor of 802.11n [12]. It is the network method that allows for the rapid deployment of mobile devices, particularly in locations where there is high demand for WLAN, such as homes, educational institutions, commercial and government offices, airports, buildings, military facilities, cafes, libraries, and other locations. WLAN also draws the majority of mobile wireless devices to companies and consumers all over the world due to its ease and flexibility. Anyone with a basic understanding of computer networking may set up their own wireless network using low-cost, easy-to-use installation methods and equipment. However, as wireless networks have grown in size as a result of improvements in technology, the threats have increased for home users and small enterprises, as well as major corporations. A WLAN uses radio waves to communicate. As a result, all network users at the first and second layers are exposed to radio frequency eavesdropping, which is one of the most significant security vulnerabilities [2]. The security defined in the IEEE standards for wireless networks has itself been one of the most serious weaknesses. The 802.11i standard, commercialized by the Wi-Fi Alliance as Wi-Fi Protected Access (WPA/WPA2) [1], was developed to address serious security weaknesses in the WEP standard.
Related Work A considerable body of work has demonstrated that the IEEE 802.11i standard does not protect against eavesdropping and various denial-of-service attacks, such as deauthentication and disassociation attacks [13,14]. Furthermore, the flexibility and backward compatibility of the 802.11i pre-shared key deployment have made dictionary and brute-force attacks easier for most hackers [3]. Surveys have also found Wi-Fi networks still using the outdated WEP encryption protocol, which has already been proven to be breakable in a little over a second using freely available hacking tools [4]. As a result, wireless LAN security remains a major problem in both residential and business networks. With their flexibility, efficiency, simplicity of access, ease of installation, and cost savings, wireless LANs have surpassed conventional networks and now carry demanding traffic such as video applications [15]. However, as a result of this expansion, wireless networks face more vulnerabilities and difficulties, both in terms of attacker targeting and in the opportunities they present to attackers [16]. To transfer data over the air, wireless networks employ radio or infrared beams. Wireless networks have a large monitoring range within which an attacker may observe the network, which endangers data integrity. Given this large attack surface, protecting the wireless network is a major issue for IT security practitioners and system administrators [5,[17][18][19][20]. This paper outlines the weaknesses of the IEEE 802.11 security standard as a security concern, as well as the primary known attacks/threats to residential and corporate wireless LAN systems. The remainder of the paper is structured as follows: Section II gives a quick overview of WLANs. Section III presents related work. Section IV discusses common vulnerabilities and security problems linked to the IEEE 802.11 security standard and WLANs. Following that, a detailed review of prevalent WLAN risks and cyberattacks is presented. Section VI contains general recommendations and an overall suggestion, whereas Section VII contains the conclusion. IEEE 802.11 AND ADVANCEMENT IEEE defines and implements a variety of protocols for the electrical and computer sectors, such as Wi-Fi 802.11, Ethernet, and IEEE 802.3. The IEEE presently has over 1,100 commercial standards in use, with another 600 in development. The IEEE 802 LAN standards are among the most well known, and IEEE 802.11 is among the most common [21][22][23]. IEEE 802 STANDARD All Wi-Fi systems for local and metropolitan area networking (LAN/MAN) are covered under the IEEE 802 standard. The IEEE 802.11 series is responsible for Wi-Fi protocols. A suffix letter was not included in the initial Wi-Fi standard, which was issued in 1997. When further variants were produced, however, a lowercase suffix letter was added to identify each variant. 802.11A STANDARD This standard was the first in the 802.11 series of Wi-Fi technologies. It specified a wireless physical layer using orthogonal frequency division multiplexing in the 5 GHz ISM band with data rates of up to 54 Mbps [24]. Despite these capabilities, 802.11a never became as popular as 802.11b. Although the 5 GHz band was larger and could handle more channels, the hardware was more costly at the time, limiting its adoption. 802.11B STANDARD This standard achieved considerably more widespread adoption than 802.11a.
Although the highest raw data rates were just 11 Mbps, the standard utilized the 2.4 GHz ISM band, which was cheaper at the time. Furthermore, Wi-Fi usage was vastly smaller at the time, and interference was not as widespread as it is now. 802.11G STANDARD The 802.11g standard was developed in response to the need for faster 2.4 GHz Wi-Fi. 802.11g achieves raw data transmission rates of 54 Mbps by using OFDM technology. It also retains a DSSS mode, meaning it can communicate at the slower 802.11b rates. Backwards compatibility was necessary because of the large number of older access points and PCs that supported only the previous standard, which made the design a challenge. WLAN VULNERABILITIES Wireless LANs have exceeded conventional networks in popularity thanks to their high flexibility, cost-effectiveness, and ease of installation. However, as WLANs have grown in popularity, the opportunities for attackers have expanded. WLANs, unlike wired networks, deliver data over the air via radio frequency or infrared transmission. An attacker may monitor a wireless connection and, in the worst-case scenario, compromise data integrity using current wireless technologies. When it comes to securing a WLAN, there are several security considerations that IT security practitioners and system administrators must address [5]. With 802.11 networks, radio frequency interference is a major concern. The majority of wireless LAN protocols, as well as other devices such as Bluetooth, wireless phones, and microwave emissions, use the 2.4 GHz frequency range. This can cause signal interference and the disconnection of a valid user [7,8]. WLANs suffer a distinct set of vulnerabilities compared with cabled LANs due to the inability to properly confine radio waves. Even if businesses set up their own access points and use antennas to guide their signals in a certain direction, it is impossible to entirely prevent wireless broadcasts from reaching undesired locations like nearby lobbies, semi-public areas, and parking lots. As a result, hackers will have an easier time obtaining sensitive information [8,25]. WLAN General Attacks / Threats An attack is an activity taken by an intruder in an attempt to compromise the organization's information. Wireless local area networks (WLANs), unlike wired networks, communicate via radio frequency or infrared transmission technologies, rendering them open to cyberattack. These attacks are designed to compromise information confidentiality, integrity, and network availability. As shown in figure 1, attacks fall into two types: passive attacks and active attacks. Passive attacks are ones in which the attacker attempts to obtain information sent or received by the network. Because the attacker does not alter the contents of the data, these cyberattacks are generally difficult to detect [9,26,27]. Traffic analysis and eavesdropping are the two forms of passive attacks [28]. In active cyberattacks, on the other hand, the attacker not only obtains access to the network's data, but also actively alters or produces fake data on the network. Any business will incur a considerable loss as a result of such nefarious behavior [9]. Emulator environment The network was examined using the OMNeT++ simulator together with the NETA and INET frameworks, through which we created a simulated version of the network to be examined.
The network to be examined is of the IEEE 802.11 type and consists of 20 normal broadcast nodes and a variable number of attacking nodes that are specified when running the emulator. All of these normal and attacking nodes are connected to a single network within a specific geographical range, as in figure 2. Examination process Each scenario was repeated thirty times. Between scenarios, the number of attacking nodes was changed, being 5 or 10, i.e., one quarter or one half of the number of nodes in the scenario. A further variable in each scenario is the message drop probability, which was set to 0.1, 0.4 and 0.8. Thus, the total number of completed trials is 9 for each of the UDP and TCP protocols. Results when using the UDP protocol First, we present the UDP scenario. The end-to-end delay is the time a packet takes to travel across the network; it is determined by the propagation time, the transmission time, and finally the processing time, in addition to the number of routers. We can see from figure 3 that the end-to-end delay increased with the number of attackers in the network, and that it also grew as the number of nodes increased. The network therefore finds it more difficult to propagate and process data between nodes, owing to service interruption. The number of messages received correctly without errors, shown in figure 4 and called the CDR, is expressed as the ratio of the total number of such messages to the number of messages expected to be sent in the network. The figure compares this ratio for the two networks that were examined. Results when using TCP protocol In this part we discuss the TCP scenario. Figures 5, 6 and 7 illustrate packet drops, the average loss rate, and the number of collisions. As expected, the number of dropped packets rises in close agreement with the number of attackers in the network, as shown in figure 5, and as the packet drop probability is varied over 0.1, 0.4, and 0.8 we also see a convergence between the levels of all these settings. For real-time intra-network communication flows, the PLR is an essential performance metric. To assure smooth and simple transmission of these data streams, the number of lost or missing packets during transmission must be kept to a minimum. Over the transmission period, it is determined by the PLR computation as follows: PLR = (Ntx − Nrx) / Ntx, where Ntx and Nrx denote the total number of packets transmitted and received, respectively. This analysis may be completed quickly by extracting the counts of all real-time packets transmitted and received. The packet collision rate is the number of data packet collisions that occur in a network during a particular time period. It shows how frequently data packets collide or are lost due to collisions. The packet collision rate is expressed as a percentage relative to the data packets successfully delivered. When two or more nodes in a network try to send data at the same time, packet collisions occur, possibly resulting in data loss. Nodes may have to resend packets as a result, which can have a detrimental influence on system performance.
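To make the metric definitions above concrete, the short sketch below computes PLR, CDR, and the collision rate from raw packet counters. The counter values and variable names are hypothetical; in practice they would be extracted from the OMNeT++ result files of each scenario.

```python
# Minimal sketch of the metrics discussed above, computed from simple packet counters.
# All counter values below are hypothetical examples.

def plr(n_tx: int, n_rx: int) -> float:
    """Packet loss ratio: fraction of transmitted packets that never arrived."""
    return (n_tx - n_rx) / n_tx

def cdr(n_correct: int, n_expected: int) -> float:
    """Correct delivery ratio: error-free messages over messages expected to be sent."""
    return n_correct / n_expected

def collision_rate(n_collisions: int, n_delivered: int) -> float:
    """Collisions expressed relative to successfully delivered packets, as a percentage."""
    return 100.0 * n_collisions / n_delivered

# Hypothetical counters for one scenario (e.g., 10 attackers, drop probability 0.4):
print(f"PLR  = {plr(10_000, 7_400):.2f}")
print(f"CDR  = {cdr(7_150, 10_000):.2f}")
print(f"Coll = {collision_rate(320, 7_400):.1f}%")
```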
Because the process inside the wireless network is irregular and transmission and reception are not restricted in time, we observe that the collision counts are random in a TCP network, but we also note that the collisions are within the usual range for any network environment. Conclusions Maintaining the security of a wireless network is a never-ending effort. In reality, no single effective security method exists. When a new technology is first launched, hackers examine it for weaknesses and then put together a bundle of software and scripts to try to attack those flaws. These tools, which are disseminated through open source channels, are becoming more centralized, mechanized, and widely accessible over time. As a consequence, anyone may readily download them. Therefore, we will never be ready to overcome all threats and vulnerabilities, and even if we were, we would waste money defending against certain low-probability, low-impact cyberattacks. On the other hand, if we focus on the most critical problems first, attackers may shift their attention to less difficult targets. As a result, efficient WLAN security will always involve a delicate balance between allowable risks and risk-mitigation techniques. By better understanding company risks, taking action to avoid the most significant and frequent attacks, and implementing industry standards, we can enhance our security solutions. In this study, the OMNeT++ simulator was used to reproduce a WLAN operating under the IEEE 802.11 Wireless LAN protocol for UDP and TCP traffic. The study's main objective was to see how various network metrics, such as data transmission delay, responsiveness, TCP aborts, and throughput, fared in terms of latency under attack. The results demonstrated that improving a wireless network's data throughput lowers delay, media access latency, and queue size.
Close Interspecies Interactions between Prokaryotes from Sulfureous Environments Green sulfur bacteria are obligate photolithoautotrophs that require highly reducing conditions for growth and can utilize only a very limited number of carbon substrates. These bacteria thus inhabit a very narrow ecologic niche. However, several green sulfur bacteria have overcome the limits of immobility by entering into a symbiosis with motile Betaproteobacteria in a type of multicellular association termed phototrophic consortia. One of these consortia, “Chlorochromatium aggregatum,” has recently been established as the first culturable model system to elucidate the molecular basis of this symbiotic interaction. It consists of 12–20 green sulfur bacteria epibionts surrounding a central, chemoheterotrophic betaproteobacterium in a highly ordered fashion. Recent genomic, transcriptomic, and proteomic studies of “C. aggregatum” and its epibiont provide insights into the molecular basis and the origin of the stable association between the two very distantly related bacteria. While numerous genes of central metabolic pathways are upregulated during the specific symbiosis and hence involved in the interaction, only a limited number of unique putative symbiosis genes have been detected in the epibiont. Green sulfur bacteria therefore are preadapted to a symbiotic lifestyle. The metabolic coupling between the bacterial partners appears to involve amino acids and highly specific ultrastructures at the contact sites between the cells. Similarly, the interaction in the equally well studied archaeal consortia consisting of Nanoarchaeum equitans and its host Ignicoccus hospitalis is based on the transfer of amino acids while lacking the highly specialized contact sites observed in phototrophic consortia. INTRODUCTION In their natural environment, planktonic bacteria reach total cell numbers of 10 6 ml -1 , whereas in sediments and soils, 10 9 and 10 11 bacterial cells·cm -3 , respectively, have been observed (Faegri et al., 1977;Whitman et al., 1998). Assuming a homogenous distribution, distances between bacterial cells in these environments would amount to 112 μm for planktonic, 10 μm for sediment environments and about 1 μm for soil bacteria (Overmann, 2001b). Taking into account the estimated number of bacterial species in soil that range from 500,000 (Dykhuizen, 1998) to 8.3 × 10 6 (Gans et al., 2005), the closest neighbors of each cell statistically should represent different species. A spatially close association of different bacterial species can result in metabolic complementation or other synergisms. In this context, the most extensively studied example is the conversion of cellulose to methane and carbon dioxide in anoxic habitats. The degradation is only possible by a close cooperation of at least four different groups of bacteria that encompass primary and secondary fermenting bacteria as well as two types of methanogens. Along this anaerobic food chain, end products of one group are exploited by the members downstream the flow of electrons. Although the bacteria involved in the first steps of cellulose degradation do not obligately depend on the accompanying bacteria for provision of growth substrates, they profit energetically from the rapid consumption of their excretion products. This renders their metabolism energetically more favorable or makes some reactions even possible (Bryant, 1979;Zehnder et al., 1982;McInerney, 1986;Schink, 1992). 
Recent studies of syntrophic communities in Lake Constance profundal sediments yielded new and unexpected results. The dominant sugar-degrading bacteria were not the typical fermenting bacteria that dominate in anaerobic sludge systems or the rumen environment. They rather represented syntrophic bacteria most closely related to the genus Bacillus that could only be grown anaerobically and in coculture with the hydrogen-using methanogen Methanospirillum hungatei (Müller et al., 2008). For efficient syntrophic substrate oxidation, close physical contact of the partner organisms is indispensable. Monocultures of Pelotomaculum thermopropionicum strain SI and Methanothermobacter thermautotrophicus show dispersed growth of the cells. In contrast, cocultures of the two strains formed tight aggregates when grown on propionate, for which the allowed distance for syntrophic propionate oxidation was estimated to be approximately 2 μm (Ishii et al., 2005). Interestingly, the H 2 -consuming partner in syntrophic relationships can be replaced by an H 2purging culture vessel, allowing Syntrophothermus lipocalidus to grow on butyrate and Aminobacterium colombiense on alanine in www.frontiersin.org pure culture (Adams et al., 2006). Thus, the syntrophic associations investigated to date are typically based on efficient H 2 -removal as obligate basis for their interdependence. Additional types of bacterial interactions have been described more recently. Cultures of Pseudomonas aeruginosa were shown to only grow on chitin if in coculture with a chitin degrading bacterium like Aeromonas hydrophila. In addition to simply growing on the degradation products produced by the exoenzymes of the partner, P. aeruginosa induced release of acetate in A. hydrophila by inhibiting its aconitase employing pyocyanin. The resulting incomplete oxidation of chitin to acetate by A. hydrophila is then exploited by P. aeruginosa for its own growth (Jagmann et al., 2010). Although these well-characterized associations certainly are of major ecological relevance in their respective environments, they were typically obtained using standard defined growth media. As a result, presently available laboratory model systems were selected based on their ability to grow readily and -under at least some experimental conditions -on their own in pure culture. Obviously, this cultivation strategy counterselects against bacterial associations of obligately or at least tight interdependence. While the significant advances in the development of cultivation-independent techniques permit a partial analysis of so-far-uncultured associations (Orphan et al., 2001;Blumenberg et al., 2004;Pernthaler et al., 2008), laboratory grown model systems are still indispensable for in-depth studies of gene expression and metabolism. One model system of prokaryotic associations that meanwhile can be grown indefinitely in laboratory culture is the phototrophic consortium "Chlorochromatium aggregatum." This consortium represents the most highly developed bacteriabacteria symbiosis known to date. In parallel, the archaea-archaea association between Ignicoccus hospitalis and Nanoarchaeum equitans has emerged as a second laboratory model over the past years (Huber et al., 2003). A comparison between the two model systems that represent two different domains of life provides first insights into the general principles of tight interactions in the prokaryotic world. 
CHARACTERIZATION OF PHOTOTROPHIC CONSORTIA AND ESTABLISHING "CHLOROCHROMATIUM AGGREGATUM " AS A MODEL SYSTEM FOR CLOSE BACTERIAL INTERACTIONS Phototrophic consortia were already discovered in 1906 (Lauterborn, 1906) and invariably encompass green or brown-colored bacteria as epibionts that surround a central bacterium in a highly ordered fashion. Several decades later, electron microscopic analyses documented the presence of chlorosomes in the epibiont cells and led to the conclusion that the phototrophic epibionts belong to the green sulfur bacteria (Caldwell and Tiedje, 1975). This was confirmed by the application of fluorescence in situ hybridization employing a highly specific oligonucleotide probe against green sulfur bacterial 16S rRNA (Tuschak et al., 1999). The other partner bacterium of the symbiosis remained much less investigated than its epibionts. First, it had even been overlooked due to its low contrast in the light microscope. Over 90 years later, the central bacterium was identified as a Betaproteobacterium (Fröstl and Overmann, 2000) that exhibits a rod-shaped morphology with tapered ends . Electron microscopy revealed the cells to be monopolarly monotrichously flagellated (Glaeser and Overmann, 2003b). Within the Betaproteobacteria the central bacterium represents a so far isolated phylogenetic lineage belonging to the family of the Comamonadaceae. The closest relatives are Rhodoferax spp., Polaromonas vacuolata and Variovorax paradoxus (Kanzler et al., 2005). Based solely on their morphology, 10 different phototrophic consortia can be distinguished to date (Overmann, 2001a;Overmann and Schubert, 2002). The majority of the morphotypes are motile, motility being conferred by the central colorless bacterium. The 13-69 epibiont cells are either green or brown-colored representatives of the green sulfur bacteria. The smaller consortia like "Chlorochromatium aggregatum" (harboring green epibionts) and "Pelochromatium roseum" (brown epibionts), are barrel shaped and consist of 12-20 epibiont cells . Rather globular in shape and consisting of ≥40 epibionts are the significantly larger consortia "Chlorochromatium magnum" (green epibionts; Fröstl and Overmann, 2000), "Pelochromatium latum" (brown epibionts; Glaeser and Overmann, 2004) and "Pelochromatium roseo-viride" (Gorlenko and Kusnezow, 1972). The latter consortium is the only one harboring two types of epibionts, with brown cells forming an inner layer and green ones an outer layer. "Chloroplana vacuolata" and "Cylindrogloea bactifera" can be distinguished from the other consortia by their immotility and different cell arrangement. "Chloroplana vacuolata" consists of rows of green sulfur bacteria alternating with colorless bacteria forming a flat sheath (Dubinina and Kuznetsov, 1976), with both species containing gas vacuoles. In "Cylindrogloea bactifera," a slime layer containing green sulfur bacteria is surrounding filamentous, colorless bacteria (Perfiliev, 1914;Skuja, 1956). Since they consist of two different types of bacteria, the names of consortia are without standing in nomenclature (Trüper and Pfennig, 1971) and, accordingly, are given here in quotation marks. When 16S rRNA gene sequences of green sulfur bacteria from phototrophic consortia were investigated from a total of 14 different lakes in Europe and North America (Glaeser and Overmann, 2004), a total of 19 different types of epibionts could be detected. Of those, only two types occurred on both, the European and North American continents. 
Although morphologically identical consortia from one lake always contained just a single epibiont phylotype, morphologically indistinguishable consortia from different lakes frequently harbored phylogenetically different epibionts. Phylogenetic analyses demonstrated that the epibiont sequences do not constitute a monophyletic group within the radiation of green sulfur bacteria. Therefore, it was concluded that the ability to form symbiotic interactions was gained independently by different ancestors of epibionts or, alternatively, was present in the common ancestor of the green sulfur bacteria. In parallel, the phylogeny of central bacteria of phototrophic consortia was investigated. This analysis exploited a rare tandem rrn operon arrangement in these bacteria that involves an unusual short interoperon spacer of 195 bp . Betaproteobacteria with this genomic feature were exclusively encountered in chemocline environments and form a novel, distinct and highly diverse subcluster within the subphylum. Within this cluster, the sequences of central bacteria of phototrophic consortia were found to be polyphyletic. Thus, like in their green sulfur bacterial counterparts, the ability to become a central bacterium may have evolved independently in Frontiers in Microbiology | Microbial Physiology and Metabolism several lineages of betaproteobacteria, or was already present in a common ancestor of the different central bacteria. In the barrelshaped types of consortia, the green sulfur bacterial epibionts have overcome their immotility. However, the existence of two different types of non-motile consortia indicates that motility is not the only advantage gained by green sulfur bacteria that form these interspecies association with heterotrophic bacteria. As described below (see Evidence for Metabolic Coupling), the exchange of metabolites seems to play a major role in the symbiotic interaction, and might therefore be the key selective factor of symbiosis in immotile phototrophic consortia. At present, "Chlorochromatium aggregatum" is the only phototrophic consortium that can be successfully cultivated in the laboratory . From the stable enrichment culture it was possible to isolate the epibiont of the consortium in pure culture using deep agar dilution series supplemented with optimized growth media (Vogl et al., 2006). On the basis of 16S rRNA sequence comparisons, the strain is distantly related to other known green sulfur bacteria (≤94.6% sequence homology) and therefore represents a novel species within the genus Chlorobium, Chlorobium chlorochromatii strain CaD. However, physiological and molecular analyses of the novel isolate did not reveal any major differences to already described strains of the phylum Chlorobi. Thus, C. chlorochromatii CaD is obligately anaerobic and photolithoautotrophic, and photoassimilates acetate and peptone in the presence of sulfide and hydrogen carbonate (Vogl et al., 2006). As a difference to its free-living counterparts the epibiont contains only a low cellular concentration of carotenoids and cannot synthesize chlorobactene. A similar anomaly had also been observed in the brown epibionts of the phototrophic consortium "Pelochromatium roseum" that do not seem to form isorenieratene (Glaeser et al., 2002;Glaeser and Overmann, 2003a). In contrast to the epibiont of "Chlorochromatium aggregatum," all efforts to cultivate the central bacterium in the absence of its epibionts have failed so far. 
PREADAPTATION OF GREEN SULFUR BACTERIA TO SYMBIOSIS Green sulfur bacteria (Family Chlorobiaceae) constitute a phylogenetically distinct lineage within the phylum Chlorobi of the domain Bacteria (Overmann, 2001a). Recently, the chemotrophic Ignavibacterium album gen. nov. sp. nov., was described (Iino et al., 2010), this novel isolate represents a deeply branching phylogenetic lineage and hence a new class within the phylum Chlorobi, whereas all green sulfur bacteria sensu stricto that are known to date represent strictly anaerobic photolithoautotrophs. Since a considerable number of different 16S rRNA gene sequence types of green sulfur bacteria engaged in a symbiotic association with the central Betaproteobacteria (see section Characterization of Phototrophic Consortia and Establishing "Chlorochromatium aggregatum" as a Model System for Close Bacterial Interactions), green sulfur bacteria may be specifically preadapted to symbiosis and the advent of symbiotic green sulfur bacterial epibionts during evolution may have involved only limited genomic changes. Indeed, several of the physiological characteristics of green sulfur bacteria are regarded as preadaptive traits for interactions with other prokaryotes. One feature of green sulfur bacteria which provides interaction with other prokaryotes is their carbon metabolism. Green sulfur bacteria autotrophically assimilate CO 2 through the reductive tricarboxylic acid cycle. One instantaneous product of photosynthetic fixation of CO 2 is 2-oxoglutarate, and 2-oxo acids represent typical excretion products of photosynthesizing cells (Sirevag and Ormerod, 1970). In natural environments, Chlorobium limicola excretes photosynthetically fixed carbon (Czeczuga and Gradzki, 1973) and thus constitutes a potential electron donor for associated bacteria. Excretion of organic carbon compounds has also been demonstrated for C. chlorochromatii strain CaD, the epibiont of the phototrophic consortium "Chlorochromatium aggregatum" (Pfannes, 2007). Vice versa, green sulfur bacteria can also take advantage of organic carbon compounds produced by other, for example, fermenting, bacteria. During phototrophic growth, they are capable of assimilating pyruvate as well as acetate and propionate through reductive carboxylation in the presence of CO 2 (pyruvate:ferredoxin oxidoreductase; Uyeda and Rabinowitz, 1971) or HCO − 3 − (phosphoenolpyruvate carboxylase; Chollet et al., 1996). The assimilation of organic carbon compounds reduces the amount of electrons required per unit cellular carbon synthesized. This capability thus enhances photosynthetic growth yield and results in a competitive advantage for green sulfur bacteria. In their natural environment, green sulfur bacteria are limited to habitats where light reaches anoxic bottom waters such as in thermally stratified or meromictic lakes. Here, cells encounter conditions favorable for growth exclusively in a rather narrow (typically cm to dm thick) zone of overlap between light and sulfide. Compared to other phototrophs, green sulfur bacteria are extremely low-light adapted and capable of exploiting minute light quantum fluxes by their extraordinarily large photosynthetic antenna complexes, the chlorosomes. In contrast to other photosynthetic antenna complexes, the bacteriochlorophyll c, d, or e molecules in chlorosomes are not attached to a protein scaffold but rather form paracrystalline, tight aggregates (Griebenow and Holzwarth, 1989;Blankenship et al., 1995). 
Until recently, the heterogeneity of pigments complicated the identification of the structural composition of chlorosomes. When a Chlorobaculum tepidum triple mutant that almost exclusively harbors BChl d was constructed, a syn-anti stacking of monomers and self-assembly of bacteriochlorophylls into tubular elements could be demonstrated within the chlorosomes (Ganapathy et al., 2009). Since they minimize the energetically costly protein synthesis, chlorosomes represent the most effective light harvesting system known. Up to 215,000 ± 80,000 bacteriochlorophyll molecules (in Chlorobaculum tepidum; Montano et al., 2003) can constitute a single chlorosome, that is anchored to 5-10 reaction centers in the cytoplasmic membrane (Amesz, 1991). This ratio of chlorophyll to reaction center is orders of magnitudes higher compared with other photosynthetic antenna structures. In the phycobilisomes of cyanobacteria the ratio is 220:1 (Clement-Metral et al., 1985), 100-140:1 in light harvesting complex II of anoxygenic phototrophic proteobacteria (Van Grondelle et al., 1983) and 28:1 in the light harvesting complex I (Melkozernov et al., 2006). The www.frontiersin.org enormous size of the photosynthetic antenna of green sulfur bacteria enables them to colonize extreme low-light habitats up to depths of 100 m in the Black Sea (Overmann et al., 1992;Marschall et al., 2010) or below layers of other phototrophic organisms like purple sulfur bacteria (Pfennig, 1978). Commensurate with their adaptation to extreme light limitation, green sulfur bacteria also exhibit a significantly reduced maintenance energy requirement compared to other bacteria (Veldhuis and van Gemerden, 1986;Overmann et al., 1992). Chlorobium phylotype BS-1 isolated from the Black Sea maintained a constant level of cellular ATP over 52 days, if exposed to low-light intensities of 0.01 mmol quanta m -2 s -1 (Marschall et al., 2010). The high efficiency of green sulfur bacteria allows them to colonize habitats in which other photosynthetic bacteria are unable to grow. A chemotrophic bacterium that associates with green sulfur bacteria and is capable of exploiting part of their fixed carbon thus would gain a selective advantage during evolution. SELECTIVE ADVANTAGE OF CONSORTIA FORMATION Free-living representatives of green sulfur bacteria are immotile and only species able to produce gas vacuoles can regulate their vertical position. However, changes in buoyant density mediated by gas vesicle production occur only over time periods of several days (Overmann et al., 1991). Due to the motility that is conferred by the flagellated central bacterium, the consortium can orientate itself much faster in light and sulfide gradients and reaches locations with optimal conditions for photosynthesis in a shorter period of time. In fact, "C. aggregatum" has been found to vary its position rapidly across the chemocline in two Tasmanian lakes (Croome and Tyler, 1984). A scotophobic response, that is swimming away from darkness toward light, has been demonstrated for intact consortia in the laboratory Glaeser and Overmann, 2003a) and leads to a rapid accumulation of consortia in (dim)light. In addition, laboratory cultures as well as natural populations of phototrophic consortia exhibit a strong chemotaxis toward sulfide Glaeser and Overmann, 2003b). In contrast to light, the spatial distribution of sulfide does not necessarily occur in a strictly vertical gradient in the natural habitat. 
In analogy to the presence of point sources of organic carbon substrates (Azam and Malfatti, 2007) that attract chemotrophic aquatic bacteria in zones extending tens to hundred micrometer around the sources (Krembs et al., 1998), organic particles sinking into anoxic water layers may develop into hot spots of sulfate reduction. Due to the motility of phototrophic consortia, the otherwise immotile green sulfur bacteria epibionts would gain rapid access and thus a highly competitive advantage over their free-living relatives especially in laterally inhomogeneous environments. If photosynthetically fixed carbon is indeed transferred to the central bacterium, however, the net balance of the increased availability of sulfide and the loss of electrons to the central bacterial partner must still be positive for the green sulfur bacterium. The tight packing of epibiont cells in the phototrophic consortium raises the question whether dissolved compounds can actually diffuse into the consortium and reach the central bacterium or if the epibionts represent a diffusion barrier around the central bacterium. This question was addressed by adding carboxyfluorescein diacetate succinimidyl ester (CFDA-SE) to intact consortia and following fluorescence in epibionts and central bacteria over time (Bayer, 2007). CFDA-SE enters cells by diffusion and, after cleavage by intracellular esterase enzymes, confers fluorescence to the bacterial cell. Central bacteria were already detectable after 2 min of exposure (Figure 1A), whereas epibionts in the same sample could only be detected after 12 min of incubation with CFDA-SE and developed only weak fluorescence (Figure 2). The fluorescence activity of the central rod remained strong throughout the experiment which is indicative of a higher esterase activity than in the epibiont. Since the central bacterium is presumably heterotrophic, it is likely to express esterases such as lipases at a higher number or intracellular concentration. These results indicate that, even in intact phototrophic consortia, diffusion of small water-soluble molecules toward the central bacterium is not significantly impeded by the surrounding layer of epibionts. Therefore, sensing of sulfide by the central bacterium itself, eliciting the sulfide chemotactic response observed for the consortia is feasible. However, in analogy to the proposal that heterotrophic bacteria colonizing the heterocysts of cyanobacteria may shield them from high ambient O 2 concentrations (Paerl and Kellar, 1978), one could speculate that the consumption of sulfide by the epibionts might decrease the concentration of sulfide reaching the central bacterium. Thus, if sensing of sulfide is carried out by the central bacterium, the photosynthetic activity of the epibiont could have a regulatory function regarding the chemotaxis of the consortium toward sulfide. The orientation of the consortium toward light and sulfide is of special interest since probably neither of these attractants is used in the metabolism of the motile central bacterium. From the perspective of the epibiont, relying on a non-photosynthetic partner for transportation would pose the risk of being carried into dark, and/or sulfide-free unfavorable deeper water layers. Consortia formation thus would either require the expression of a suitable chemotactic response (i.e., toward sulfide) and of a suitable photosensor in the central bacterium or effective means of interspecific communication. 
Acquisition of these traits must have been a critical stage and happened early during the coevolution of the bacterial partners in phototrophic consortia. In recent ultrastructural studies of the central bacterium of "C. aggregatum," conspicuous 35-nm-thick and up to 1-μm-long zipper-like crystalline structures were found that resemble the chemotaxis receptor Tsr of Escherichia coli (Wanner et al., 2008). In a comparative ultrastructural study of 13 distantly related organisms harboring chemoreceptor arrays from all seven major signaling domain classes, receptors were found to possess an universal structure which has presumably been conserved over long evolutionary distances (Briegel et al., 2009). The prominent ultrastructure discovered in the central bacterium exhibits several similarities to the chemoreceptors reported and provides a first indication that a chemotaxis receptor is present in the central bacterium. One characteristic feature of green sulfur bacteria that provides a basis for interaction with other bacteria is the extracellular deposition of sulfur globules (zero valence sulfur), the initial product of sulfide oxidation during anoxygenic photosynthesis. This sulfur is further oxidized to sulfate only after depletion of sulfide. The extracellular deposition renders the sulfur available to other bacteria such as, for example, sulfur reducers. Therefore, it had initially been proposed that the central bacterium of phototrophic consortia is a sulfate-or sulfur-reducing bacterium. In that case, extracellular sulfur produced by the green sulfur bacteria could be utilized by the central bacterium to establish a close sulfur cycle within the phototrophic consortium (Pfennig, 1980). Such a sulfur cycle has been established in defined syntrophic cocultures of Chlorobium phaeovibrioides and Desulfuromonas acetoxidans. In these cocultures, acetate is oxidized by Desulfuromonas acetoxidans with sulfur as electron acceptor, which leads to a recycling of the sulfide that can then be used again for anoxygenic photosynthesis by Chl. phaeovibrioides. Only minute amounts of sulfide (10 μM) are required to keep this sulfur cycle running (Warthmann et al., 1992). Similarly, sulfate reducers are able to grow syntrophically with green sulfur bacteria with only low equilibrium concentrations of sulfide (Biebl and Pfennig, 1978). In addition, such interactions may also encompass transfer of organic carbon compounds between the partners. In mixed cultures of Desulfovibrio desulfuricans or D. gigas with Chlorobium limicola strain 9330, ethanol is oxidized to acetate with sulfate as electron acceptor and the acetate formed is incorporated by Chl. limicola such that ethanol is completely converted to cell material. However, the hypothesis of a sulfur cycling within phototrophic consortia became less likely by the discovery that the central bacterium belongs to the Betaproteobacteria, whereas only the Deltaproteobacteria or Firmicutes encompass typical sulfur-or sulfate-reducers (Fröstl and Overmann, 2000). As shown above, the exchange of sulfur compounds has been established across a physiologically and phylogenetically diverse range of prokaryotes. But those symbiotic interactions were not accompanied by consortia formation. It is thereby concluded, that sulfur cycling does not appear to be sufficiently selective to explain the advent of phototrophic consortia during evolution. 
FEATURES OF THE EPIBIONT GENOME THAT RELATE TO SYMBIOSIS In order to make a first assessment of the imprint of symbiotic lifestyle on the genome of the epibiont, the genome features of C. chlorochromatii CaD can be compared to those of Nanoarchaeum equitans and Ignicoccus hospitalis in the archaeal consortia. Nanoarchaeum equitans is the representative of a new archaeal phylum Nanoarchaeota and most likely represents a parasitic epibiont of Ignicoccus. The N. equitans genome comprises only 491 kb and encodes 552 genes, rendering it the smallest genome for an exosymbiont known to date (Waters et al., 2003). Genome reduction includes almost all genes required for the de novo biosynthesis of amino acids, nucleotides, cofactors, and lipids as well as many known pathways for carbon assimilation. Commensurate with this findings, the Nanoarchaeum equitans epibiont appears to acquire its lipids from its host (Waters et al., 2003). Yet, the N. equitans genome harbors only few pseudogenes or regions of non-coding DNA compared with genomes of obligate bacterial symbionts that undergo reductive evolution, and thus is genomically significantly more stable than other obligate parasites. This has been interpreted as evidence for a very ancient relationship between N. equitans and Ignicoccus (Brochier et al., 2005). The recent bioinformatic analysis also suggest that N. equitans represents a derived, fast-evolving euryarchaeal lineage rather than the representative of a deep-branching basal phylum (Brochier et al., 2005). With a size of 1.30 Mbp and 1494 predicted open reading frames (ORFs), the genome of I. hospitalis shows a pronounced www.frontiersin.org genome reduction that has been attributed to the reduced metabolic complexity of its anaerobic and autotrophic lifestyle and an highly efficient adaptation to the low energy yield of its metabolism (Podar et al., 2008). Similar to I. hospitalis, the genome size of Pelagibacter ubique (1.31 Mbp and 1354 ORFs) so far marks the lower limit of freeliving organisms. It exhibits hallmarks of genome streamlining such as the lack of pseudogenes, introns, transposons, extrachromosomal elements, only few paralogs, and the shortest intergenic spacers yet observed for any cell (Giovannoni et al., 2005). This likely reduces the costs of cellular replication (Mira et al., 2001). As a second feature, the genome of P. ubique has a G:C content of 29.7% that may decrease its cellular requirements for fixed nitrogen (Dufresne et al., 2005). By comparison, a much larger genome size of 2.57 Mbp has been determined for C. chlorochromatii CaD. This size represents the average value of the 11 other publicly available genomes of green sulfur bacteria (Table 1). Thus, a reduction in genome size that is characteristic for bacterial endosymbionts (Andersson and Kurland, 1998;Moran et al., 2002Moran et al., , 2008 and for the archaeal consortium (Podar et al., 2008) did not occur during the evolution of the epibiont of "C. aggregatum." This suggests (i) a shorter period of coevolution of the two partner bacteria in phototrophic consortia, (ii) a significantly slower rate of evolution of their genomes, or (iii) that the genome of C. chlorochromatii is not undergoing a streamlining process as observed in other symbiotic associations. The latter suggestion would indicate a lack of selective advantage for "Chl. aggregatum" from genome streamlining. Wet-lab and in silico analyses of the epibiont genome revealed the presence of several putative symbiosis genes. 
Wet-lab and in silico analyses of the epibiont genome revealed the presence of several putative symbiosis genes. An initial combination of suppression subtractive hybridization with bioinformatics approaches identified four ORFs as candidates. Two of the ORFs (Cag0614 and Cag0616) exhibit similarities to putative filamentous hemagglutinins that harbor RGD (arginine-glycine-aspartate) tripeptides. In pathogenic bacteria, hemagglutinins with these motifs are involved in the attachment to mammalian cells. Most notably, a comparative study of 580 sequenced prokaryotic genomes revealed that Cag0614 and Cag0616 represent the largest genes detected in prokaryotes so far. In fact, Cag0616 is only surpassed in length by the exons of the human titin gene (Reva and Tümmler, 2008). The two other genes detected (Cag1919 and Cag1920) resemble repeats-in-toxin (RTX)-like proteins and hemolysins, respectively. All four genes have in common that they are unique to C. chlorochromatii CaD and that certain domains of their inferred products are only known from bacterial virulence factors. Provided that the four genes have not been misassigned, they are potentially involved in the symbiotic interaction between the two partner bacteria in phototrophic consortia. To identify additional symbiosis genes, an in silico subtractive hybridization between the genome sequence of C. chlorochromatii CaD and the other 11 sequenced green sulfur bacterial genomes was performed. This yielded 186 ORFs unique to the epibiont (Wenter et al., 2010), 99 of which encode hypothetical proteins of yet unknown function. Although this provides a large number of putative symbiosis genes, the numbers are rather low compared to the unique and unknown ORFs in the other green sulfur bacteria (Table 1). Even if it is assumed that all of these unknown genes encode proteins involved in symbiosis, this number is still rather small compared to the 1387 genes encoding niche-specific functions in enterohemorrhagic E. coli O157:H7 (Perna et al., 2001). Low numbers of niche-specific genes have also been reported for Salmonella enterica and Bacillus anthracis and have been interpreted as an indication of preadaptation of the non-pathogenic ancestor. This supports the hypothesis of a preadaptation of green sulfur bacteria to symbiosis. From a broader perspective, the discovery of putative symbiosis genes in the epibiont genome that resemble typical bacterial virulence factors suggests that modules thought to be limited to bacterial pathogens are employed in a much wider biological context.
THE REGULATORY RESPONSE EVOKED BY SYMBIOSIS INVOLVES GENES OF THE NITROGEN METABOLISM
When the proteome of C. chlorochromatii CaD in the free-living state was compared to that of the symbiotic state by 2-D differential gel electrophoresis (2-D DIGE), it became apparent that symbiosis-specific regulation involves genes of central metabolic pathways rather than symbiosis-specific genes (Wenter et al., 2010). In the soluble proteome, 54 proteins were expressed exclusively in consortia. Among them were a considerable number of proteins involved in amino acid metabolism, including glutamate synthase, 2-isopropylmalate synthase, and the nitrogen regulatory protein P-II. The latter showed the highest overall upregulation, amounting to a 189-fold increase in transcript abundance as determined by subsequent RT-qPCR. It is therefore concluded that the amino acid requirement of the epibiont in the consortium is higher than in pure culture.
Parallel investigations of the membrane proteome revealed that a branched-chain amino acid ABC-transporter binding protein was expressed only in the associated state of the epibiont. Interestingly, the expression of the ABC-transporter binding protein could also be induced in the free-living state by the addition of sterile-filtered supernatant of the consortium culture, but not by peptone or branched-chain amino acids themselves. This is evidence of a signal exchange between the two symbiotic partners mediated through the surrounding medium. The results of the proteome analysis were supplemented by transcriptomic studies of the epibiont in the associated and the free-living state (Wenter et al., 2010). Of the 328 differentially expressed genes, 19 up-regulated genes are involved in amino acid synthesis, whereas six genes of amino acid pathways were down-regulated. The conclusion that the nitrogen metabolism of the epibiont is stimulated in the symbiotic state is commensurate with the simultaneous up-regulation of the nifH, nifE, and nifB genes and with the prominent expression of the P-II nitrogen regulatory protein. The results of the proteome analyses indicate that (i) a signal exchange occurs between the central bacterium and the epibiont that controls the expression of symbiosis-relevant genes and (ii) metabolic coupling between C. chlorochromatii and the central Betaproteobacterium may involve amino acids. Metabolic coupling has also been detected in the two-membered microbial consortium consisting of Anabaena sp. strain SSM-00 and Rhizobium sp. strain WH2K. Between the two species, nanoscale secondary ion mass spectrometry (nanoSIMS) analyses indicated a transfer of metabolites containing ¹³C and ¹⁵N, fixed by the heterocysts of the filamentous cyanobacterium, to the attached epibiont cells (Behrens et al., 2008).
EVIDENCE FOR METABOLIC COUPLING
The proteomic and transcriptomic evidence described in the preceding paragraphs points toward an exchange of metabolites between the two symbiotic partners of "C. aggregatum." Meanwhile, more direct evidence for a transfer of carbon between the bacterial partners of phototrophic consortia has been obtained. In a series of labeling experiments with ¹⁴C, a rapid exchange of labeled carbon from the epibiont to the central bacterium was observed (Johannes Müller and Jörg Overmann, unpublished observations). External addition of several amino acids as well as 2-oxoglutarate to the growth medium inhibited this carbon exchange. Together with the observed excretion of photosynthetically fixed carbon by C. chlorochromatii CaD, these results suggest a transfer of newly synthesized small-molecular-weight organic matter to the central bacterium. Such a transfer may provide the central bacterium with a selective advantage in illuminated sulfidic environments, where degradation of organic matter proceeds mainly through the anaerobic food chain and involves competition of chemoheterotrophs for organic carbon compounds. By transferring amino acids, the epibiont may support growth of the central bacterium not only with respect to carbon, but also with respect to nitrogen and even sulfur. To date it has remained unclear whether the association in phototrophic consortia also offers an additional advantage for the green sulfur bacterial epibiont apart from the gain of motility and the resulting potential increase in sulfide supply.
Extensive substrate utilization assays with C. chlorochromatii CaD revealed that only the addition of acetate and peptone stimulated the growth of the epibiont of "C. aggregatum" (Vogl et al., 2006). It remains to be tested whether transfer of organic carbon in this form occurs in the opposite direction, from the central bacterium to the epibiont. Stable isotope signatures (¹³C) of I. hospitalis and N. equitans were analyzed to investigate a possible carbon transfer between the two archaeal partners. Labeling patterns of Ignicoccus amino acids from cells grown in coculture as well as in pure culture were compared to those of the Nanoarchaeum amino acids. For this purpose, amino acids were separated by chromatography, and the incorporation of ¹³C at specific carbon positions was identified by NMR spectroscopy. The labeling patterns from all three cultures were identical. In addition, genes involved in the de novo biosynthesis of amino acids are missing in the Nanoarchaeum genome. Based on this combined evidence, it was concluded that amino acids are transferred from the I. hospitalis host to the N. equitans cells (Jahn et al., 2008). In addition, cellular macromolecules seem to be exchanged between the partners in the archaeal consortia. In the latter, LC-MS analyses of membrane lipids in their intact polar forms showed very similar chemical patterns in both organisms, with archaeol and caldarchaeol constituting the main core lipids. Furthermore, stable isotope labeling (¹³C) yielded nearly identical results for the hydrocarbons derived from N. equitans and I. hospitalis. These results, combined with the lack of genes for lipid biosynthesis in the genome of N. equitans, led to the conclusion that lipids in the archaeal consortium are synthesized in I. hospitalis and transported to its partner organism (Jahn et al., 2004). Amino acids also seem to be of central importance for other symbioses. The deep-sea tube worm Riftia pachyptila is dependent on arginine supplied by its bacterial endosymbionts (Minic and Hervé, 2003), whereas in legume root nodules, amino acids are used as both an ammonium and a carbon shuttle (Lodwig et al., 2003). Interestingly, Rhizobia become symbiotic auxotrophs for branched-chain amino acids after infecting the host plant owing to a downregulation of the respective biosynthetic pathways, making them dependent on the supply by the plant (Prell et al., 2009).
MECHANISMS OF METABOLITE EXCHANGE
The very structure of the phototrophic consortium in itself facilitates the putative transfer of compounds from one partner to the other, since the direct cell-cell contact prevents diffusion of compounds over larger distances and hence minimizes transfer time. Theoretically, the metabolic coupling of the two partners of the consortium "C. aggregatum" may be based on an unspecific excretion of substrates by the epibiont followed by uptake by the central bacterium, or it may involve specific molecular structures and mechanisms of substrate exchange. Several observations indicate that the latter is the case in "C. aggregatum." The ultrastructure of the contact sites in "C. aggregatum" and in the archaeal consortia has been studied in detail using different electron microscopy approaches (Wanner et al., 2008). In free-living epibionts, as well as in all other known green sulfur bacteria, chlorosomes are distributed evenly along the inner face of the cytoplasmic membrane. However, in the associated state, chlorosomes are absent in the green sulfur bacterial epibionts at the site of attachment to the central bacterium.
Replacing the antenna structures, a 17-nm-thick layered structure of yet unknown function has been discovered (Vogl et al., 2006; Wanner et al., 2008). Interestingly, treatment of the epibionts with the extracellular cross-linkers DTSSP and BS3 revealed that the branched-chain amino acid ABC-transporter binding protein (compare The Regulatory Response Evoked by Symbiosis Involves Genes of the Nitrogen Metabolism) cross-links with other proteins, indicating that it is localized at the cell surface or in the periplasm (Wenter et al., 2010). In contrast to the epibionts of the phototrophic consortium "Chlorochromatium aggregatum," which maintain a permanent cell-cell contact with the central bacterium, Nanoarchaeum equitans has been observed in different states of attachment to Ignicoccus hospitalis. The surface structures of the two organisms may either be in direct contact or, alternatively, in close vicinity to each other. In the latter case, fibers bridging the gap between the cells are clearly visible. In "C. aggregatum," connections between the two partner bacteria stretching out from the central bacterium are more prominent than in the archaeal consortium. Periplasmic tubules (PT) are formed by the outer membrane and are in linear contact with the epibionts (Figure 3A). The PT are distributed over the entire cell surface (Figure 3B) and reach 200 nm in length at the poles of the central bacterium. It had been speculated that the periplasmic tubules represent connections of a shared periplasmic space (Wanner et al., 2008). However, this could not be confirmed by fluorescence recovery after photobleaching (FRAP) analyses (Johannes Müller and Jörg Overmann, unpublished observations). After staining of the consortia with calcein acetoxymethylester (calcein AM), only the epibionts but not the central bacterium could be detected by fluorescence microscopy (Figure 4A). Obviously, the central bacterium (arrows in Figure 4A) lacks an esterase capable of cleaving calcein AM. This result in itself already contradicts the hypothesis of a combined periplasm, because the highly fluorescent dye calcein, once formed in the epibiont cells, should have diffused into the central bacterium. Furthermore, after subsequent bleaching of one of the epibiont cells (arrowheads in Figures 4B,C) using confocal microscopy, no recovery of fluorescence could be detected in the bleached cell, which excludes the possibility of free diffusion between the epibiont cells through the interconnecting pili (Figures 4B,C). A similar experiment has been conducted with the filamentous cyanobacterium Anabaena cylindrica, which is considered to be a truly multicellular prokaryote. Single calcein-stained cells within an Anabaena filament fully recovered calcein fluorescence 12 s after bleaching. This effect was ascribed to intercellular channels allowing free diffusion of molecules from cytoplasm to cytoplasm (Mullineaux et al., 2008). Such a rather unspecific transfer is unlikely to occur across the contact site of the phototrophic consortium "C. aggregatum," where exchange must be much more substrate-specific.
CONCLUSIONS
While the different types of symbioses and syntrophic associations discussed in the preceding sections all provide an energetic advantage to one or both partners, only a few of these associations have reached the level of organizational complexity of the highly structured, permanent consortia.
Thus, permanent cell-cell contact is not mandatory in the case of syntrophic cultures, in which depletion of substrates can lead to disaggregation of the associations (Peduzzi et al., 2003). The highly developed and obligate interaction in phototrophic consortia is likely to be related to the pronounced energy limitation in their low-light habitats and to the efficient and regulated exchange of metabolites. Phototrophic consortia harboring prokaryotes other than green sulfur bacteria have not been found so far, emphasizing the role of preadaptation of the green sulfur bacterial partner for the development of the symbiosis. This preadaptation was augmented by the gain of specific functions, such as genes similar to virulence genes, periplasmic tubules for cell-cell contact, and the intracellular sorting of chlorosomes in the epibionts. The gain of motility by the epibiont seems to constitute a selective advantage that led to the coevolution with a motile betaproteobacterium. The recent completion of the genome sequence of the central bacterium of "C. aggregatum" should help to determine whether, and which, additional preadaptations of the betaproteobacterium were essential for establishing this symbiosis.
How Digital Skills Affect Rural Labor Employment Choices? Evidence from Rural China: Expanding employment channels for rural households is a crucial means of enhancing the income of rural residents and enhancing the quality of rural employment. This study examines the impact of digital skills on rural laborers' employment choices and explores the underlying mechanisms by using data from the China Family Panel Studies (CFPS) spanning 2014–2018. By employing various models, including the Probit, IV, mediated effects, and propensity-score-matching methods, the study reveals that digital skills have a significant impact on rural laborers' employment choices. Specifically, digital skills increase rural labor's employment opportunities in nonfarm and employed employment while reducing the proportion of informal employment. Additionally, the analysis indicates that the main channels through which digital skills influence rural labor's employment choices are human and social capital. A heterogeneity analysis further reveals that work-study and social-entertainment skills have a more significant effect on rural laborers' nonfarm and employed employment opportunities while inhibiting informal employment. Hence, to enhance the quality of future rural employment, the government must encourage rural workers to enhance their digital literacy and digital application skills while improving digital infrastructure.
Introduction
Employment is the biggest livelihood factor and the most basic support for economic development. Widening the employment channels of the rural labor force is an important way to increase income sources and improve living standards [1]. However, in many low- and middle-income countries, labor force retention in rural areas is prominent, and frictional unemployment and structural conflicts have increased [2]. The employment problem of unemployed and low-skilled people in rural areas has gradually become prominent owing to their lack of job skills. To improve the employment situation of rural residents, the Chinese government has taken a series of feasible measures. For example, it has promulgated employment-related policies and plans such as the Guidance on Stabilizing and Expanding Employment in the Development of the Digital Economy and the Outline of the Digital Countryside Development Strategy, which provide a sound policy environment for the stable development of employment. Although these measures have achieved certain results, the employment situation of China's rural labor force is still unsatisfactory [1]. Therefore, actively transforming the traditional employment model and building a more diversified employment choice model has become the long-term driving force for sustainable employment for China's rural labor force.
With the improvement of digital infrastructure and the popularization of intelligent devices, digital skills play an increasingly significant role in workers' access to information. The improvement of digital infrastructure has greatly changed the lifestyle and communication behavior of rural residents and has a certain impact on their individual employment choices [3][4][5][6][7][8]. According to data released by the China Internet Network Information Technology Center, the Internet penetration rate in rural China reached 58.8% in 2022, and 160 million rural households have broadband access. The length of long-distance optical cable lines is also growing. Digitally skilled workers tend to have higher education levels, digital literacy, and job-searching ability [9,10]. Using US labor market data, Atasoy [2] found that the acquisition of digital skills can increase the employment rate by about 1.8 percentage points, with a greater impact in rural and remote areas. At the same time, Manuela [11] analyzed the employment situation in Albania's rural areas and found that digital skills are conducive to the nonagricultural employment of rural labor, bring efficient and low-cost learning, and help improve the human capital level of labor. Digital skills can also promote entrepreneurial opportunities for rural laborers and improve the quality of life of rural residents [12][13][14]. Job searches based on digital skills have positive effects on the employment of rural laborers and can alleviate employment difficulties and frictional unemployment [15,16]. Agrawal et al. [17] claimed that with the help of digital technology, difficulties in obtaining employment information can be alleviated to a certain extent. Digital technology provides opportunities for social and economic development and helps to alleviate employment difficulties [18,19]. Adesugba et al. [20] found that more and more individual workers in Nigeria search for employment information through the Internet. Digital skills can improve the imbalance between supply and demand in the labor market and narrow the digital divide [21][22][23]. Therefore, improving rural digital infrastructure and improving rural residents' digital literacy and digital application ability have become key measures to promote rural labor employment.
The positive impact of digital skills on rural labor force employment has been confirmed by several studies [2,4,9,11,16]. However, the specific role of digital skills in the employment choices of the rural labor force remains unclear and therefore needs to be explored in more depth. The potential contributions of this paper are as follows: (1) This paper systematically analyzes the impact of digital skills on the employment choices of rural laborers in China and provides new perspectives for promoting the diversified employment of rural laborers. (2) To deal with possible endogeneity problems, this paper adopts PSM, variable replacement, and the IV-Probit two-step method for robustness testing, which provides a reference research method for similar studies. (3) This paper also discusses the potential mechanisms of social capital and human capital through which digital skills affect rural labor's employment choices.
The main purpose of this paper is to explore the impact of digital skills on rural labor force employment choices, including the importance, role, and impact mechanisms of digital skills in rural labor force employment choices. Specifically, this paper aims (1) to explore the role of digital skills in offering employment choices for rural laborers and the impact of digital skills on enhancing employment opportunities for rural laborers; (2) to elucidate the mechanisms through which digital skills influence the employment choices of rural laborers; and (3) to analyze the facilitative effects of digital skill use on rural labor force employment choices and the differences in these effects across different groups of the rural labor force, including the effects of different genders and regions. Through these studies, we can gain a deeper understanding of the role of digital skills in the employment decisions of rural laborers and provide more powerful policy suggestions for promoting the employment of rural laborers.
Analysis of Digital Skills and Employment Choices for Rural Laborers
The relationship between digital skills and employment has been receiving attention from the academic community. The acquisition of digital skills can broaden the access of rural laborers to external information, improve their efficiency in obtaining employment information, and facilitate the reasonable matching of rural laborers with employment positions, thus increasing the probability of employment [24,25]. In addition, the acquisition of digital skills can reduce the cost of information acquisition and alleviate technical and knowledge-based economic poverty in rural areas, which has a positive impact on the development of the digital economy in rural areas. The mastery of digital skills can optimize the matching of rural labor with labor market information and match high-quality employment resources to cities and regions with low information levels, thus alleviating the current uneven distribution of employment resources [26]. The first hypothesis is formulated as follows:
Hypothesis 1. Digital skills can facilitate the employment choices of rural labor.
2.1.2. Analysis of Social Capital, Digital Skills, and Employment Choices for Rural Laborers
Studies on the relationship between individual social capital and employment have shown that an individual's possession of rich social capital is one of the key factors affecting employment [27]. The acquisition of digital skills can strengthen the social network communication of rural laborers, broaden the scope and depth of social relationships, and contribute to the accumulation of social capital [23]. This social capital can help rural workers to obtain employment information and can provide employment opportunities and resources, thus improving their employment quality and employment probability [4,20,22]. Empirical studies have shown that in rural South Africa, having an extensive social network can significantly increase the probability of employment among rural residents [21]. Digital skills, as a bridge for information exchange, can not only expand an individual's social network but also help maintain the long-term value and stability of social capital [28,29]. Specifically, the acquisition of digital skills can help farmers gain access to more employment information and opportunities by expanding private social networks and engaging in online communication, thus increasing their employment options and development opportunities in the nonfarm sector [27]. On the basis of the above findings, the following hypothesis is proposed:
Hypothesis 2. Social capital mediates the effect of digital skills on rural labor's off-farm employment, employed employment, and informal employment choices.
Analysis of Human Capital, Digital Skills, and Employment Choices for Rural Laborers
With the rapid development of digital technology, digital networks have become a new learning platform that overcomes the limitations of time and geography on learning resources and provides more convenient conditions for achieving learning resource sharing and online communication [16,17]. The mastery of digital skills can enable rural laborers to access learning resources such as vocational skills training at a lower cost without leaving home, thus improving their skills and cognitive abilities [18]. Vocational skills training can enhance the human capital level of rural workers, and having a higher level of human capital will increase their labor skill level and productivity, thus making it easier to find employment opportunities [19]. In addition, the level of human capital supported by digital skills plays an important role in the employment decisions of the rural labor force [30]. On the basis of the above findings, the following hypothesis is proposed:
Hypothesis 3. Human capital mediates the effect of digital skills on rural labor's off-farm employment, employed employment, and informal employment choices.
Data Sources
This paper is based on data from the China Family Panel Studies (CFPS) from 2014 to 2018. The sample covers 25 provinces, municipalities, and autonomous regions and is therefore widely representative. Given that the study population is the rural labor force aged 19 to 65, we keep only the data of rural households. After removing samples with apparently abnormal data or missing main variables, a final sample of 5598 valid observations was obtained.
Descriptions of Variables
The variables used in this paper are shown in Table 1.
(1) Dependent Variable
The dependent variable studied in this paper is the employment status of rural labor. Referring to the research method of Song and He [31], we classify employment status into three categories according to the employment sector: whether workers are in nonfarm employment, employed employment, or informal employment.
(2) Independent Variable
The core independent variable in this paper is digital skills; i.e., a rural worker takes a value of 1 if he or she has newly mastered digital skills in the current period relative to the previous period and 0 otherwise. On this basis, this paper draws on Mou et al.'s study [8] to further subdivide digital skills into two dummy variables, namely work-based learning skills and social-entertainment skills, to explore the possible differential impact of different types of digital skills.
(3) Control Variables
This paper controls for exogenous factors, such as gender, age, ethnicity, and family size, that affect the rural labor force, as in the study of He and Song [32]. In addition, this paper controls for some endogenous factors affecting employment, including marital status, political affiliation, health status, intelligence level, years of education, and net household income.
(4) Mediator Variable
Human capital is one of the important factors affecting the entry of rural labor into the labor market [33]. Most scholars use workers' education level and average years of education as proxy variables for human capital [34]. However, given the dynamic nature of human capital and the subjective initiative of workers, this paper uses the indexes proposed by Qi and Chu [29] to measure human capital, including participation in nonacademic education and whether people obtain help from others to find jobs.
(5) Statistical Description of the Variables
As can be seen from Table 1, the proportion of nonagricultural employment for rural labor with digital skills is 28.2%, higher than that of the group without digital skills (16.9%). In addition, the proportion of rural workers with digital skills in employed employment was 35.7%, which was significantly higher than that of those without digital skills (18.7%). However, the proportion of rural workers with digital skills in informal employment was 91.7%, lower than the 97.3% of those without digital skills. These descriptive statistics suggest that mastering digital skills may influence the employment choices of rural workers. In addition, there are some differences between the characteristics of the two groups, those with and those without digital skills. For example, slightly fewer men than women have digital skills; moreover, the average age of those with digital skills is about 45, lower than the average age of those without digital skills (52). It should be noted that samples from those younger than 19 years old and those older than 65 years old are excluded from this study because most of these samples are not in the range of job market choices [33].
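As a minimal illustration of how the group comparison reported in Table 1 can be reproduced, the sketch below computes employment shares by digital-skill status with pandas. The file name and column names are hypothetical stand-ins for the CFPS variables described above.

```python
import pandas as pd

# Hypothetical data file and column names standing in for the CFPS variables;
# df holds one row per rural worker-year observation.
df = pd.read_csv("cfps_rural_labor.csv")

outcomes = ["nonfarm_employment", "employed_employment", "informal_employment"]

# Share of each employment type by digital-skill status (0/1 dummy),
# mirroring the descriptive comparison reported in Table 1.
shares = df.groupby("digital_skill")[outcomes].mean()
print(shares.round(3))
```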
Model Selection
(1) Baseline Regression Model
The dependent variable in this paper is a binary dummy variable, and traditional regression models often have problems with predicted values exceeding the range of the dummy variable when dealing with this type of variable. The Probit model is widely used to deal with such problems so that the regression results remain in the normal range. Therefore, following Yin et al.'s (2019) study [35], we choose the Probit model to analyze the impact of digital skills on the employment choices of rural laborers. The baseline regression model is set as follows:

Employment*_it = β0 + β·Digitalskill_it + Σ_j γ_j·X_j,it + ε_it, with Employment_it = 1 if Employment*_it > 0 and 0 otherwise.

The explained variable Employment denotes one of three employment choice variables, taking a value of 1 for nonfarm employment, employed employment, or informal employment and 0 for agricultural jobs, self-employment, or formal employment, respectively. The core explanatory variable Digitalskill is a dummy variable; i.e., a rural worker takes 1 if he or she has newly acquired digital skills in the current period relative to the previous period and otherwise takes 0. In addition, Digitalskill is further subdivided into two dummy variables, namely work-study skills and social-entertainment skills, to discuss the possible differentiated effects of different digital skill types. Here, i denotes the rural laborer and t represents the year. X_j is a set of control variables, including the year and the individual, household, and regional characteristics that may affect rural labor force employment. β indicates the marginal effect of digital skill mastery on rural labor force employment choices.
While digital skills influence the employment choices of rural laborers, the employment choices of rural laborers themselves may influence whether individuals possess digital skills [36]. Those with digital skills may have higher education, social status, and other personal characteristics that affect their employment. To effectively address the possible endogeneity problem between digital skills and rural labor employment, this paper uses the propensity-score-matching (PSM) method proposed by Rosenbaum and Rubin [37], which can verify the relationship between digital skills and rural labor employment.
(2) Mediating Effect Model
In the transmission mechanism analysis, digital skills may influence the employment choices of rural labor through the human capital channel and the social capital channel. In this paper, a standard mediating effects model is used for further empirical testing to analyze the indirect effect of the explanatory variable (X) on the explained variable (Y) through the mediating variable (M), using the following stepwise equations:

Y = cX + ε_1
M = aX + ε_2
Y = c'X + bM + ε_3

The mediating effect model was constructed with labor force employment choice (Employment) as the explained variable Y, human capital and social capital as the mediating variables M, and digital skills as the explanatory variable X.
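A minimal sketch of the baseline Probit estimation, its average marginal effects, and the stepwise mediation check is given below using statsmodels. The variable names and the abbreviated control set are assumptions, so this illustrates the estimators rather than the paper's exact specifications.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data file and column names standing in for the CFPS variables.
df = pd.read_csv("cfps_rural_labor.csv")
controls = "age + gender + years_education + marital_status + health + family_size + C(year)"

# (1) Baseline Probit of a binary employment outcome on the digital-skill dummy.
probit_fit = smf.probit(f"nonfarm_employment ~ digital_skill + {controls}", data=df).fit()

# Average marginal effects, comparable in spirit to the effects reported in Table 2.
print(probit_fit.get_margeff(at="overall").summary())

# (2) Stepwise mediation in the spirit of the equations above:
# mediator on X, then outcome on X and the mediator (human capital as an example).
med_fit = smf.ols(f"human_capital ~ digital_skill + {controls}", data=df).fit()
out_fit = smf.probit(
    f"nonfarm_employment ~ digital_skill + human_capital + {controls}", data=df
).fit()
print(med_fit.params["digital_skill"], out_fit.params["human_capital"])
```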
Baseline Regression Results
The results in Table 2 report the marginal effects of digital skills on the nonfarm employment, employed employment, and informal employment of the rural labor force after introducing exogenous and endogenous control variables. The results in columns (1) and (2) show that, after the exogenous and endogenous variables were controlled for, the probability of nonfarm employment of rural laborers with digital skills is 9.5 percentage points higher than that of laborers without digital skills, and digital skills have a significant positive effect on the nonfarm employment of rural laborers. The results in columns (3) and (4) show the effect of digital skills on the employed employment of rural laborers; overall, the marginal effect of digital skills is significantly positive, and digital skills make rural laborers 4.6% more likely to be in employed employment. The results in columns (5) and (6) show that digital skills reduce the probability that the rural labor force will be in informal employment by 2.2%.
Propensity-Score-Matching Estimation
The study in this paper focuses on the influence of digital skills on the employment choices of rural laborers, taking into account that the employment choice itself may influence the self-selection behavior of whether individuals acquire digital skills. For this reason, this paper adopts the propensity-score-matching method and chooses three matching approaches: nearest-neighbor matching, radius matching, and kernel matching. The results of the balance test showed that the proportions of deviation between groups for the matched sample variables were all less than 10%, and there were no significant differences between the treatment and control groups, which satisfies the balance hypothesis. In Table 3, the ATT values from the three matching methods ranged from 0.0678 to 0.0790 for nonfarm employment and from 0.0424 to 0.0448 for employed employment and passed the 1% significance test.
Regression Estimation with Variable Substitution
To test the robustness of the results of the baseline model, this paper uses the importance of Internet information to replace digital skills as the explanatory variable [38]. The higher the importance of an individual's use of the Internet as an information channel, the greater the probability of acquiring digital skills and the higher the frequency of using the corresponding skills, so there is a strong correlation between the two. According to the results in Table 4, after replacing digital skills with the importance of Internet information as the explanatory variable and controlling for exogenous and endogenous variables, the probability of both nonfarm employment and employed employment significantly increases and is positive at the 1% and 5% statistical levels, respectively. Meanwhile, informal employment is significantly negative at the 1% statistical level.
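The propensity-score-matching step described above can be sketched as follows: estimate the propensity score with a logit model, match each digitally skilled worker to the nearest unskilled worker on that score, and average the outcome differences to obtain the ATT. Variable names are hypothetical, and the radius and kernel matching variants used for Table 3 are omitted for brevity.

```python
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.neighbors import NearestNeighbors

df = pd.read_csv("cfps_rural_labor.csv")  # hypothetical file and column names

# 1. Propensity score: probability of having digital skills given covariates.
ps_model = smf.logit(
    "digital_skill ~ age + gender + years_education + health + family_size", data=df
).fit(disp=0)
df["pscore"] = ps_model.predict(df)

treated = df[df["digital_skill"] == 1]
control = df[df["digital_skill"] == 0]

# 2. Nearest-neighbor matching on the propensity score (1-to-1, with replacement).
nn = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]])
_, idx = nn.kneighbors(treated[["pscore"]])
matched_controls = control.iloc[idx.ravel()]

# 3. ATT: mean outcome difference between treated workers and their matches.
att = treated["nonfarm_employment"].mean() - matched_controls["nonfarm_employment"].mean()
print(f"ATT (nonfarm employment): {att:.4f}")
```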
IV Estimation
To deal with the possible endogeneity problems in the Probit model, this paper selects the provincial Internet penetration rate as the instrumental variable for digital skills [39]. Studies have shown a correlation between digital skills and provincial Internet penetration. To further ensure the validity of the instrumental variable, a weak-instrument test was conducted in this study. The results show that the first-stage F statistic in Table 5 far exceeds the critical value at the 10% level, indicating that the instrument is not weak. At the same time, the IV estimation results in Table 5 show that, with the provincial Internet penetration rate as the instrumental variable and the control variables unchanged, digital skills have a significant impact on the nonagricultural employment, employed employment, and informal employment of the rural labor force at the 5% significance level. (A minimal two-step sketch of this approach is given further below.)
Mechanism Test Results
As shown in Table 6, digital skills have a significant effect on human capital and social capital at the 1% level. Specifically, the coefficient of human capital is significantly positive and has a significant positive effect on both the off-farm and employed employment of the rural labor force, but it has a significant negative effect on informal employment. In contrast, the coefficient of the social capital channel shows a significant negative effect.
Heterogeneity Test Results
The results of the baseline regression and robustness tests verify that digital skills have a significant promoting effect on the nonfarm employment and employed employment of rural laborers and a significant negative effect on informal employment. Despite controlling for variables at the individual, household, and regional levels, the individuals in the survey sample are still not completely homogeneous, and the promoting effects may vary significantly across groups. Therefore, this paper examines the heterogeneity of the influence of digital skills on employment choice from the perspectives of different digital skills, gender, age, and region.
Heterogeneity in the Use of Digital Skills
Table 7 reports the estimated results of the employment impact of the rural labor force's acquisition of different digital skills. The results show that work-study skills and social-entertainment skills significantly affect nonfarm employment at the 1% level, with work-study skills making the largest positive contribution to nonfarm employment. Work-based learning skills significantly affect employed employment at the 5% level, while social-entertainment skills significantly affect employed employment at the 10% level. It is noteworthy that work-based learning skills and social-entertainment skills, however, significantly reduce the probability of informal employment.
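The two-step IV estimation referred to earlier in this section can be approximated with a control-function sketch: regress the digital-skill dummy on the provincial Internet penetration instrument plus controls, then add the first-stage residual to the Probit. This is a simplified illustration with hypothetical variable names, not the exact procedure behind Table 5.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("cfps_rural_labor.csv")  # hypothetical file and column names
controls = "age + gender + years_education + health + family_size + C(year)"

# First stage: digital skills on the instrument (provincial Internet penetration) and controls.
first = smf.ols(f"digital_skill ~ internet_penetration + {controls}", data=df).fit()
# With a single instrument, the first-stage F on the instrument equals its squared t statistic.
print("First-stage F on instrument:", first.tvalues["internet_penetration"] ** 2)
df["fs_resid"] = first.resid

# Second stage: Probit including the first-stage residual (control-function term);
# a significant residual coefficient would indicate endogeneity of digital skills.
second = smf.probit(f"nonfarm_employment ~ digital_skill + fs_resid + {controls}", data=df).fit()
print(second.summary().tables[1])
```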
Gender Heterogeneity
A regression analysis was performed by gender grouping, controlling for individual, family, region, and year variables. Table 8 shows that, at the 1% significance level, digital skills have a significant impact on male and female nonfarm employment, and the probability of nonfarm employment of rural women is higher than that of men. However, when it comes to employed employment, digital skills have a significant effect only on women, not on men. For informal employment, the influence coefficient of digital skills in the male sample is −0.295, which is significant at the 1% level, whereas the effect is not significant in the female sample.
Regional Heterogeneity
In this paper, the overall sample was divided into eastern, central, and western subsamples according to region, and a regression analysis was conducted. According to the results in Table 9, digital skills have a significant impact on the nonfarm employment of the rural labor force in the eastern, central, and western regions at the 1% level of significance. In addition, digital skills also have significant effects on the employed employment of the labor force in the eastern and western regions, but not in the central region. Digital skills also have a significant negative effect on the informal employment of the rural labor force in the eastern region at the 1% statistical level, and an inhibitory effect in the central and western regions.
Discussion
After using the Probit model regression, robustness tests, and mediation effect tests, this paper finds that digital skills have important effects on the employment choices of rural labor in China. Specifically, digital skills significantly improve the possibility of nonagricultural employment and employed employment for China's rural labor force while also helping to promote the transition of the rural labor force from informal to formal employment. Further research finds that digital skills have significant effects on the nonagricultural employment and employed employment of China's rural labor force through the improvement of human capital but have inhibitory effects on informal employment. In addition, this paper also discusses the mediating effect of social capital on the nonagricultural employment and employed employment of rural labor, but no significant results are found. In terms of heterogeneity, research shows that work-learning skills and social-recreation skills have a significant impact on the nonagricultural employment and employed employment of rural labor, but their contribution to informal employment is not obvious. Digital skills also help increase rural women's chances of achieving off-farm and employed employment. Finally, the study also finds that there may be regional differences in the impact of digital skills on rural labor employment.
Digital skills have significant effects on the nonfarm and employed employment of rural laborers in China, which is consistent with the findings of Atasoy [2], Zhao and Xiang [4], Mao et al. [10], Manuela [11], Badea et al. [16], and Campos et al. [26]. In addition, our study supports the findings of Song and He [31] on the positive effect of digital skills on the formal employment of the rural labor force in China. Digital skills increase the probability of nonfarm and employed employment while decreasing the probability of informal employment. Our findings are similar to those of Kuhn and Mansour [13], Agrawal et al. [17], Adesugba et al. [20], and Liu and Diao [27], which show that digital skills contribute most to human-capital-mediated effects on nonfarm and employed employment.
However, digital skills have a dampening effect on informal employment, which is where our study differs from previous studies. Our study shows that good digital literacy and occupational skills improve the ability of rural laborers to find and choose employment, giving them a competitive advantage in the labor market [10]. On the other hand, there is no significant mediating effect of social capital in the employment decisions of rural labor. This suggests that digital skills can increase the probability of employment by reducing the need for help from others, decreasing the level of dependence on others, and facilitating active job search opportunities [29]. In addition, the study found that work-study skills and social-entertainment skills contributed the most to nonfarm and employed employment among the rural labor force, which is consistent with the findings of Qi and Chu [29] and He and Song [32]. However, their contribution to informal employment is not significant. These results suggest that individuals with work-based learning skills are relatively scarce in resource-limited rural areas and play an irreplaceable role in promoting nonfarm and employed employment. At the same time, the positive impact of online learning, socializing, and entertainment on informal employment is offset by their negative impact, suggesting that the contribution of these skills to informal employment is insignificant. Our study further reveals that digital skills have a more significant impact on nonfarm and employed employment among rural women than among men, which is consistent with the findings of Dettling [25] and Lin et al. [40]. This may be because rural male laborers need a stable and continuous income to support their families and thus prefer formal employment with stable employment relationships and social security, whereas rural female laborers are more likely to choose flexible employment activities in order to take care of their families. Finally, our study finds that the nonfarm employment effect of digital skills is significantly higher in eastern and central China than in western China, while the employed employment effect is higher in western China than in eastern and central China. This may be because digital infrastructure and technology adoption are more widespread in the eastern and central regions than in the western region, such that digital skills generate more nonfarm employment opportunities for rural laborers in eastern and central China.
Digital skills play an active role in promoting diverse employment in the rural labor force and realizing sustainable employment. First, the mastery of digital skills can provide more job opportunities for rural laborers, expand employment channels, and improve their employment quality. The mastery of digital skills can also help rural workers better adapt to the needs of the labor market and improve their competitiveness, thus achieving sustainable employment. Second, the government should strengthen the construction of rural digital infrastructure, guide the rural labor force to use the Internet for job hunting and vocational training, stimulate their enthusiasm to learn digital skills, and improve the application of digital skills in the labor market. In this way, the rural workforce can better master digital skills, better adapt to the job market in the digital age, and achieve employment sustainability. Third, mastering digital skills can help narrow the regional digital development gap and promote coordinated regional development. The mastery of digital skills can improve the competitiveness of the rural labor force in the digital economy and promote the development of the digital economy in rural areas, thus narrowing the gap between urban and rural digital development and promoting the coordinated development of regions.
However, our study has some limitations. First, because of data limitations, we used mainly the latest information on digital skills acquired by the rural labor force relative to the previous period, without exploring the depth and quality of their digital skills. Therefore, it is necessary to further expand the definition of digital skills and find relevant data to explore the impact of digital skills on the employment decisions of the rural labor force. Second, we did not deeply investigate the pathways through which digital skills affect different age groups. Because rural laborers of different ages have different levels of digital skills, this may affect employment decisions in different ways. Therefore, further quantitative studies in this area are needed in the future.
Conclusions
With the development of digital technology, digital skills have continuously penetrated various fields of people's lives and become an important force for improving people's quality of life. This study aims to explore the impact of digital skills on the employment choices of the rural labor force in China and its mechanism of action. By analyzing the China Family Panel Studies data from 2014 to 2018, this study finds that digital skills have positive effects on rural laborers' off-farm and employed employment choices, and the findings are verified by multiple empirical methods. Digital skills can significantly affect rural labor's nonfarm and employed employment by increasing the level of human capital, while the effect on informal employment is not significant. In addition, this study also finds that work-study skills and social-entertainment skills can increase the probability that rural laborers will obtain nonfarm and employed employment, but they do not significantly contribute to informal employment. Digital skills also promote nonfarm employment among rural female laborers and contribute to the transfer of rural male laborers from informal to formal employment. These findings provide useful references for the development of digital skills and employment policies for the rural labor force.
Table 1. Definitions of variables and descriptive statistical results.
Note: Calculated on the basis of CFPS data from 2014 to 2018.
Table 2. Impact of digital skills on different types of employment choices for the rural labor force.
Table 3. Results of propensity score matching.
Table 4. Results of variable substitution regression.
Table 5. Regression results of instrumental variables.
Table 6. The mediating effect.
Table 7. Impact of different digital skills on the employment of the rural labor force.
Table 8. Impact of digital skills on the employment of different genders in the workforce.
Table 9. Impact of digital skills on rural labor force employment in different regions.
Characterization and Starch Properties of a Waxy Mutant in Japonica Rice
A rice waxy mutant M6 was generated from a japonica rice cultivar Kitaake through gamma irradiation. In this study, we characterized the mutant and analyzed its starch properties. The M6 with milky opaque kernels had lower seed length, width, and weight than wild type. The cavity in the center of the starch granule might be responsible for the waxy appearance of M6 mature kernels. Sequence analysis of the granule-bound starch synthase I (GBSSI) gene showed that a 23 bp duplication was inserted into exon 2, generating a premature stop codon. No GBSSI protein was detected in the endosperm of M6. The isolated starch showed a similar ratio of short and long branch-chains of amylopectin between M6 and wild type, but the M6 starch had no amylose. Both the M6 and wild type had A-type starch, but the M6 starch exhibited higher relative crystallinity than wild type starch. Compared with wild type starch, the M6 starch had significantly higher swelling power, gelatinization enthalpy, and breakdown viscosity and lower water solubility, gelatinization peak temperature, peak viscosity, hot viscosity, final viscosity, and setback viscosity. The M6 starch had significantly lower resistance to amylase hydrolysis than wild type starch. The characterization and starch properties of M6 were investigated in order to reveal the molecular mechanism of the mutation and the structural and functional properties of the starch, so that the mutant can be used in waxy rice breeding in the future.
Introduction
Rice (Oryza sativa L.) is one of the most important cereal crops and is the staple food for over half the world's population. Starch, the major storage carbohydrate in rice endosperm, consists of two distinct components: amylose and amylopectin. Amylose is a mixture of lightly branched and long-chain linear molecules, whereas amylopectin is a much larger molecule with a highly branched structure consisting of about 95% α-1,4 linkages and 4-5% α-1,6 linkages. According to the amylose content (AC), rice varieties are classified into waxy (0-5%), low AC (6-18%), intermediate AC (19-23%), or high AC (>23%) types [1]. Rice seeds with high amylose levels are usually associated with dry, hard, and poorly glossy characteristics, which lower rice eating quality [2]. Therefore, control of the AC of starch is a major objective in rice breeding. Amylose in the rice endosperm is mainly synthesized by granule-bound starch synthase I (GBSSI), which is encoded by the Waxy (Wx) gene located on chromosome 6 [3]. At least six Wx alleles have so far been identified: Wx^a, Wx^b, Wx^mq, Wx^op, Wx^hp, and wx. In nonwaxy rice varieties, Wx^a and Wx^b have been recognized as being distributed in the indica and japonica subspecies, respectively, and their expression levels are highly correlated with AC in the endosperm [4]. In low-AC or waxy rice varieties, a mutation in the Wx gene drastically reduces the synthesis of amylose. The leaky mutations in Wx alleles, such as Wx^mq from Milky Queen, Wx^op from the opaque endosperm mutants, and Wx^hp from Yunnan landraces, control the low-AC trait [5][6][7]. Sequencing of the Wx promoter and 5' noncoding regions from 22 Bangladeshi rice cultivars shows that three of them with low to very low amylose lack the G/T splice-site mutation [8]. In addition, analysis of wx mutations in 25 tropical waxy rice varieties shows that exon 2 carries a 23 bp duplication in the coding sequence [9].
In rice, AC is one of the key components influencing eating and cooking quality (ECQ). AC is controlled mainly by the expression level of Wx, the activity of GBSSI, and the binding characteristics of GBSSI with the starch granule [4,5]. Different mutations in Wx have different effects on starch properties. A tyrosine residue at position 224 of Wx correlates with the formation of extra-long amylopectin chains in cultivars carrying Wx^a [10]. Using site-directed mutagenesis, three amino acid substitutions in Wx transgenic rice lines were shown to produce significant differences in GBSSI activities, AC, and starch physicochemical properties [11]. An amino acid substitution in the Wx^hp allele reduces the binding of GBSSI to the starch granule and the AC, affecting the pasting properties of starch in rice seeds [5]. Therefore, it is necessary to investigate the effects of different Wx mutations on the properties of starch. Waxy rice, also known as glutinous rice, is widely used for food products, such as glutinous rice crackers and glutinous rice wine, in Thailand, Laos, and particularly in China. In Thailand, most of the waxy rice varieties are photoperiod-sensitive landraces [9]. Being photoperiod-sensitive landraces, these waxy rice varieties are difficult to popularize and apply over large areas. Compared with other rice varieties, the Kitaake variety has a life cycle of only nine weeks, allowing up to four generations a year, and is insensitive to photoperiod changes, which can greatly accelerate functional genetic studies and the breeding of new rice varieties [12]. In the Chinese rice market, many waxy rice varieties have been developed by ⁶⁰Co gamma irradiation, such as Funuo 101, Funuoyou 396, Guifunuo, Yangfunuo 1, and Yangfunuo 4. In this study, we isolated a rice waxy mutant from the japonica rice cultivar Kitaake through gamma irradiation. The characterization and starch properties of M6 were investigated. The major objective of this study was to reveal the molecular mechanism of the mutation and the structural and functional properties of the starch in order to use the mutant effectively in waxy rice breeding programs in the future.
Plant Materials
The waxy mutant M6 was isolated from a ⁶⁰Co-irradiated mutant pool of the japonica rice cultivar Kitaake. The Kitaake and M6 were grown in a paddy field at Yangzhou University during the natural growing season. Mature seeds were harvested and used to isolate starch.
Scanning Electron Microscopy
Seeds were randomly selected for phenotypic analysis. To obtain cross sections, seeds were mounted on aluminum specimen stubs with adhesive tabs, coated with gold, and observed under an environmental scanning electron microscope (Philips XL-30) at 5 kV.
RNA Extraction and Sequence Analysis
Total RNA was extracted from seedlings of wild type and M6 using an RNAprep Pure Plant Kit (TIANGEN, Beijing). First-strand complementary DNA (cDNA) was synthesized with oligo(dT18) using a PrimeScript Reverse Transcriptase Kit (Takara, Japan). The wild type and M6 GBSSI cDNA sequences were cloned using primers 5'-ATGTCGGCTCTCACCACGTCCCA-3' and 5'-AGGAGAACGTGGCTGCTCCTTGA-3', and the PCR product was introduced into the pEASY-Blunt vector (TransGen, Beijing) and transformed into E. coli strain DH5α. The recombinant plasmid was sequenced with an ABI Prism 3730 XL DNA Analyzer (PE Applied Biosystems, USA).
Protein Extraction and Western Blot Analysis
Mature seed endosperms were ground to powder in liquid N₂.
The powder was then suspended in extraction buffer consisting of 50 mM Tris/HCl, pH 8.0, 0.25 M sucrose, 2 mM DTT, 2 mM EDTA, and 1 mM phenylmethylsulphonyl fluoride. After incubation on ice for 1 h, the homogenate was centrifuged for 20 min at 14,000 g, and the supernatants were transferred to new centrifuge tubes. Proteins were resolved by SDS-PAGE and transferred electrophoretically to a polyvinylidene difluoride membrane. The antibodies used were anti-GBSSI rabbit antibody (Immunogen, Wuhan, China) diluted 1:5000, anti-HSP82 (Beijing Protein Innovation) diluted 1:10000, and horseradish peroxidase-linked secondary antibody (Beyotime, Shanghai, China) diluted 1:5000.
Isolation of Starch from Brown Rice
The brown rice was soaked in distilled water at 4 °C for 24 h and extensively ground in a mortar. The ground sample was filtered through five layers of cotton cloth and then through 100-, 200-, and 400-mesh sieves, successively. The sample was centrifuged at 4000 g for 5 min and washed with distilled water. Starch was washed three times with water and twice with anhydrous ethanol. Finally, samples were dried at 40 °C and ground through a 100-mesh sieve.
Measurement of Starch Molecular Structure
Apparent amylose content (AAC) of starch was determined following the iodine colorimetric method described by Wang et al. [13]. The molecular weight distribution of starch was analyzed using an Agilent Technologies gel-permeation chromatography (GPC) 220 system according to the method described by Lin et al. [14].
X-Ray Diffraction (XRD) Analysis of Starch
Starch XRD patterns were obtained with an X-ray powder diffractometer (D8, Bruker, Germany). All samples were equilibrated in a desiccator over a saturated solution of NaCl to maintain a constant humidity (relative humidity = 75%) for 7 days prior to XRD analysis. The relative crystallinity was determined as described by Wei et al. [15].
Swelling Power and Water Solubility Determination of Starch
Swelling power and water solubility of starch were determined according to the method of Konik-Rose et al. [16] with some modifications. A starch-water suspension (2%, w/v) was placed in a 2 mL centrifuge tube and heated in a water bath at 95 °C for 30 min with regular gentle shaking. The sample was cooled to room temperature and centrifuged at 8000 g for 20 min. The swelling power was calculated as the weight ratio of the precipitated gel to the dry starch.
Thermal Property Analysis of Starch
Thermal properties of starch granules were investigated with a differential scanning calorimeter (DSC) (200-F3, NETZSCH, Germany). Three milligrams of starch was mixed with 9 μL of distilled water and sealed in an aluminum pan. The sample was then heated from room temperature to 130 °C at a rate of 10 °C/min.
Pasting Property Analysis of Starch
The pasting properties of starch were evaluated with a rapid visco analyzer (RVA) (RVA-3D, Newport Scientific, Narrabeen, Australia). Two grams of starch was dispersed in 25 mL of distilled water and subjected to gelatinization analysis. A programmed heating and cooling cycle was used, in which the sample was held at 50 °C for 1 min, heated to 95 °C at a rate of 12 °C/min, maintained at 95 °C for 2.5 min, cooled to 50 °C at a rate of 12 °C/min, and then held at 50 °C for 1.4 min.
Enzyme Hydrolysis Analysis of Starch
The starch was hydrolyzed by porcine pancreatic α-amylase (PPA) and Aspergillus niger amyloglucosidase (AAG).
For PPA hydrolysis, ten milligrams of starch was suspended in 2 mL of enzyme solution (0.1 M phosphate sodium buffer, pH 6.9, 25 mM NaCl, 5 mM CaCl 2 , 0.02% NaN 3 , 50 U PPA (Sigma A3176)). For AAG hydrolysis, ten milligrams of starch was suspended in 2 mL of enzyme solution (0.05 M acetate buffer, pH 4.5, 5 U AAG (Sigma A7095)). The hydrolyses of PPA and AAG were conducted in a shaking water bath with continuous shaking (100 rpm) at 37 and 55 °C, respectively. After hydrolysis, starch slurry was quickly centrifuged (5000 g) for 5 min. The soluble carbohydrate in the supernatant was determined to quantify the hydrolysis degree using the anthrone-H 2 SO 4 method. Statistical Analysis For sample characterization, at least three replicate measurements were performed. All data represent the means ± standard deviation (n=3). The results were analyzed using the Student's t-test to examine differences. Results with a corresponding probability value of p<0.01 were considered to be statistically significant. Phenotypic Characterization of M6 Numerous mutants with defective endosperm were screened from the 60 Co-induced mutant library of japonica rice cultivar Kitaake and an opaque-kernel mutant was isolated and named M6. Throughout the vegetative growth stage, the mutant plants displayed no significant differences from the wild type plants. The mature grains of M6 were phenotypically similar to its wild type (Fig. 1A), while the brown rice of M6 was opaque and presented a milkywhite appearance (Fig. 1B-D). Iodine staining is a sensitive and convenient method for the detection of amylose in various tissues. When the seeds of M6 were cut transversely and stained with an iodine solution, a typical reddish color of waxy starch was revealed in the endosperm (Fig. 1C-F;) [17]. Scanning electron microscopy analysis of transverse sections indicated that the compound starch granules in both wild type and M6 endosperm cells were irregularly polyhedral and densely packed, while some cavities were only observed in the center of the starch granule in M6 (Fig. 1G, H). This phenotype in M6 starch granule was similar to the low amylose materials such as Y268F and E410D [11]. Seed size and weight measurements showed that seed thickness was largely comparable between the wild type and M6, but the seed length, seed width and thousand seed weight were significantly reduced in the M6 (Fig. 1I-L). These results suggested that the mutation in M6 might affect the amylose synthesis in endosperm. Molecular Analysis of M6 Mutation It is generally accepted that amylose synthesis is carried out by GBSS. Cereals contain two forms of GBSS, GBSSI and GBSSII, and GBSSI is responsible for amylose synthesis in storage tissues, such as endosperm [18]. Thus, the cDNA sequences of GBSSI were amplified by PCR and sequenced. Comparison of the sequences of wild type and the M6 revealed that the M6 allele carried 23 bp insertion (red font) in the exon 2 of GBSSI ( Fig. 2A). The 23 bp insertion was the duplication of the 23 bp (black underline) in front of them and generated a premature stop codon that allowed translation of only the first 57 amino acids of the GBSSI protein ( Fig. 2A, B). Thus, M6 was a loss-of-function mutant in GBSSI. Subsequently, we used western blot to analyze GBSSI protein in rice endosperm. As shown in Fig. 2C, the GBSSI antibody specially recognized the endogenous GBSSI protein band in wild type but not in the M6. 
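The quantities defined in the Methods above (swelling power as the weight ratio of precipitated gel to dry starch, hydrolysis degree from the soluble carbohydrate measured by the anthrone-H2SO4 method, and significance testing of triplicate measurements at p < 0.01) can be expressed as a short script. The sketch below is illustrative only; the numerical values are placeholders rather than measured data.

# Minimal sketch of the starch calculations described in the Methods above.
# All numbers are illustrative placeholders, not measured data.
import numpy as np
from scipy import stats

def swelling_power(gel_weight_g, dry_starch_g):
    """Swelling power = weight of precipitated gel / weight of dry starch."""
    return gel_weight_g / dry_starch_g

def water_solubility(supernatant_solids_g, dry_starch_g):
    """Water solubility (%) = dissolved solids / dry starch x 100."""
    return 100.0 * supernatant_solids_g / dry_starch_g

def hydrolysis_degree(soluble_carbohydrate_mg, starch_mg=10.0):
    """Hydrolysis degree (%) from the soluble carbohydrate measured by the
    anthrone-H2SO4 method, relative to the 10 mg starch input."""
    return 100.0 * soluble_carbohydrate_mg / starch_mg

# Hypothetical triplicate measurements (n = 3) for wild type and M6.
wt_swelling = np.array([11.8, 12.1, 12.0])
m6_swelling = np.array([14.9, 15.3, 15.1])

t_stat, p_value = stats.ttest_ind(wt_swelling, m6_swelling)
print(f"swelling power WT {wt_swelling.mean():.1f} vs M6 {m6_swelling.mean():.1f}, "
      f"p = {p_value:.4f}, significant at p < 0.01: {p_value < 0.01}")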
In summary, these results demonstrated that the mutation of GBSSI was responsible for the M6 phenotypes. Structural Properties of M6 Starch The AAC of wild type and M6 starch as determined by iodine colorimetry is given in Table 1. The AAC of M6 starch was 1.2%, which was significantly lower than that of wild type starch (14.3%). The molecular weight distribution of starch as determined by GPC is shown in Fig. 3A. In general, the GPC chromatogram of isoamylasedebranched starch exhibits three peaks. The Peaks 1 and 2 represent the short-branch chains (A and short B chains) and long-branch chains (long B chains) of amylopectin, respectively, and the Peak 3 is amylose [19]. The M6 contained very low apparent amylose. Therefore, only two peaks (Peak 1 and 2) were detected in the GPC profile of M6 starch. Whereas for wild type starches, three peaks were detected in its GPC profiles due to the simultaneous existence of amylose and amylopectin (Fig. 3A). The percentage of the peak area in GPC profile can reflect the molecular weight distribution of starch, and the area ratio of Peak 1 and Peak 2 can be used as an index of the extent of amylopectin branching; the higher the ratio, the higher the branching degree [20]. As is shown in Table 1, M6 starch consisted of approximately 76.0% amylopectin short branch-chains and 24.0% amylopectin long branchchains, and had 3.2% amylopectin branching degree, while wild type starch contained 65.0% amylopectin short branch-chains, 21.5% amylopectin long branch-chains, and 13.5% amylose, and had 3.0% amylopectin branching degree. The XRD patterns of starches are presented in Fig. 3B. Both wild type and M6 starches had the characteristics of A-type crystallinity with strong reflection peaks at about 15°and 23°, and an unresolved doublet at around 17°and 18° 2θ (Fig. 3B), which was in conformity with the characteristics of normal cereal starches [21]. The relative crystallinity of M6 starch was 33.4%, which was higher than that of wild type starch (27.1%). This result was in accordance with that the relative crystallinity is negatively related to amylose [22]. Functional Properties of M6 Starch The swelling power and water solubility of wild type and M6 starches at 95°C are shown in Table 1. The M6 starch had higher swelling power and lower water solubility than wild type starch. The swelling power is a measure of the water-holding capacity of starch after being heated in water, cooled, and centrifuged, while the water solubility reflects the degree of dissolution during the starch swelling procedure [23]. Amylose is considered to contribute to the inhibition of water absorption and swelling of starch, whereas amylopectin tends to promote the process [24]. The M6 starch had higher amylopectin content and lower AC than wild type starch, which might contribute to its higher swelling power. The lower water solubility of M6 starch might be related to its lower AC, which could not leach out of the starch granules into the water. The thermal properties of starch samples were determined by DSC, and their thermograms and thermal parameters are given in Fig. 4A and Table 2. Compared with wild type starch, M6 starch exhibited lower gelatinization peak temperature and higher gelatinization enthalpy. The gelatinization peak temperature correlates positively with AC [25]. 
Starch with higher amylopectin contents can easily form more crystalline structures within granules; this type of starch also requires more energy to melt and uncoil the double helix structure during gelatinization [26]. In the present study, the M6 starch had a lower AC and higher crystallinity, and therefore required a lower gelatinization temperature and more energy for gelatinization than wild type starch. The pasting properties of wild type and M6 starches measured by RVA are presented in Fig. 4B, and their pasting parameters are given in Table 2. Compared with the wild type starch, M6 starch had significantly different pasting properties because of its different molecular structure. The M6 starch had significantly lower peak viscosity, hot viscosity, final viscosity, setback viscosity, and peak time than wild type starch. This finding supports the suggestion that rice starch with a low amylose content is more prone to gelatinization, in agreement with previous studies on maize [27]. In contrast, the breakdown viscosity of M6 starch was significantly higher than that of wild type starch, possibly because amylose intertwines with amylopectin in wild type starch, which helps to maintain the integrity of the starch granules [28]. The time courses of PPA and AAG hydrolysis of starches are presented in Fig. 4C and Fig. 4D. A biphasic hydrolysis trend by PPA or AAG was observed in wild type and M6 starches, with an initial rapid hydrolysis of the amorphous region followed by a slower hydrolysis phase, which was in agreement with previous studies [13]. The hydrolysis rate of M6 starch by PPA or AAG was markedly higher than that of wild type starch. PPA hydrolysis begins at the granule surface; the enzyme then penetrates into the granule interior and degrades starch from the inside outward, whereas AAG hydrolyzes starch from the outer surface of the granule [29]. Susceptibility of starch to PPA and AAG attack is influenced by factors such as AC, amylose to amylopectin ratio, crystalline structure, granule integrity, porosity of granules, and structural inhomogeneities [30]. In the present study, the low AC and the cavity structure of M6 starch granules explain why M6 starch was enzymatically hydrolyzed faster than the wild type starch. Conclusion In conclusion, M6 was a loss-of-function mutant of GBSSI, which produced a waxy endosperm composed of amylose-free starch granules. Because of the differences in molecular structure between M6 and wild type, M6 starch showed higher relative crystallinity, swelling power, gelatinization enthalpy, and breakdown viscosity, but lower water solubility, gelatinization peak temperature, peak viscosity, hot viscosity, final viscosity, setback viscosity, and resistance to PPA and AAG hydrolysis than wild type starch. These findings could provide practical information on the potential usefulness of the waxy mutant. To, onset temperature; Tp, peak temperature; Tc, conclusion temperature; ΔT, gelatinization temperature range (Tc-To); ΔH, gelatinization enthalpy; PV, peak viscosity; HV, hot viscosity; BV, breakdown viscosity (PV-TV); FV, final viscosity; SV, setback viscosity (FV-HV); PT, peak time. Data are shown as means ± standard deviation (n=3) and compared with wild type by Student's t-test (**P<0.01).
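The derived parameters listed in the footnote above (ΔT = Tc - To, BV = PV - TV, SV = FV - HV) and the GPC peak-area percentages can be computed as in the following sketch. The input values are hypothetical, and TV (trough viscosity) is assumed here to correspond to the hot viscosity (HV) reported in the text.

# Minimal sketch of the derived thermal, pasting, and GPC parameters defined above.
# Input values are placeholders; TV (trough viscosity) is assumed to equal HV.

def gelatinization_range(t_onset, t_conclusion):
    """dT = Tc - To (gelatinization temperature range, degrees C)."""
    return t_conclusion - t_onset

def breakdown_viscosity(peak_v, trough_v):
    """BV = PV - TV."""
    return peak_v - trough_v

def setback_viscosity(final_v, hot_v):
    """SV = FV - HV."""
    return final_v - hot_v

def gpc_fractions(peak_areas):
    """Percentage of each GPC peak area (Peak 1 = short amylopectin chains,
    Peak 2 = long amylopectin chains, Peak 3 = amylose, if present)."""
    total = sum(peak_areas.values())
    return {name: 100.0 * area / total for name, area in peak_areas.items()}

# Hypothetical example values
print(gelatinization_range(58.0, 72.0))                # dT
print(breakdown_viscosity(2400.0, 1500.0))             # BV
print(setback_viscosity(2100.0, 1500.0))               # SV
print(gpc_fractions({"Peak1": 76.0, "Peak2": 24.0}))   # waxy starch: no amylose peak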
v3-fos-license
2024-03-27T15:51:31.898Z
2024-03-25T00:00:00.000
268700925
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.frontiersin.org/articles/10.3389/fbioe.2024.1377334/pdf?isPublishedV2=False", "pdf_hash": "2657c503951fece1b29d74dc5fe0f2b6133fb293", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41640", "s2fieldsofstudy": [ "Agricultural and Food Sciences", "Environmental Science", "Biology" ], "sha1": "7fc28b14e8034de6ace38954a53e73e96d3d915d", "year": 2024 }
pes2o/s2orc
Reconstruction of the genome-scale metabolic network model of Sinorhizobium fredii CCBAU45436 for free-living and symbiotic states Sinorhizobium fredii CCBAU45436 is an excellent rhizobium that plays an important role in agricultural production. However, there still needs more comprehensive understanding of the metabolic system of S. fredii CCBAU45436, which hinders its application in agriculture. Therefore, based on the first-generation metabolic model iCC541 we developed a new genome-scale metabolic model iAQY970, which contains 970 genes, 1,052 reactions, 942 metabolites and is scored 89% in the MEMOTE test. Cell growth phenotype predicted by iAQY970 is 81.7% consistent with the experimental data. The results of mapping the proteome data under free-living and symbiosis conditions to the model showed that the biomass production rate in the logarithmic phase was faster than that in the stable phase, and the nitrogen fixation efficiency of rhizobia parasitized in cultivated soybean was higher than that in wild-type soybean, which was consistent with the actual situation. In the symbiotic condition, there are 184 genes that would affect growth, of which 94 are essential; In the free-living condition, there are 143 genes that influence growth, of which 78 are essential. Among them, 86 of the 94 essential genes in the symbiotic condition were consistent with the prediction of iCC541, and 44 essential genes were confirmed by literature information; meanwhile, 30 genes were identified by DEG and 33 genes were identified by Geptop. In addition, we extracted four key nitrogen fixation modules from the model and predicted that sulfite reductase (EC 1.8.7.1) and nitrogenase (EC 1.18.6.1) as the target enzymes to enhance nitrogen fixation by MOMA, which provided a potential focus for strain optimization. Through the comprehensive metabolic model, we can better understand the metabolic capabilities of S. fredii CCBAU45436 and make full use of it in the future. Introduction Rhizobia are Gram-negative bacteria that can fix nitrogen from the air by parasitizing in plant nodules and transport it to host plants meanwhile the host provides carbon and other nutrients in return.Unlike rhizobia, free-living nitrogen-fixing bacteria such as Azotobacter chroococcum (Song et al., 2020) can autonomously fix nitrogen and have no dependency on the plant body.Among various biological nitrogen fixation systems, the legume-rhizobium symbiosis holds the highest efficiency and prominence.Symbiotic nitrogen fixation (SNF) by rhizobia is an effective way of biological nitrogen fixation, which is of great help to agricultural production (Yang et al., 2017;Lindstrom and Mousavi, 2020).To form SNF, rhizobia firstly need to recognize and produce specific signal molecules, which helps to form specialized tissue differed from host plants and finally become bacteroids (Timmers et al., 2000).By the symbiotic form, rhizobia release ammonium into the host plant in exchange for a supply of carbon and nutrients, which sustains a mutually beneficial relationship between plants and rhizobia (Prell and Poole, 2006).So far, a variety of rhizobia have been found like Bradyrhizobium diazoefficiens USDA110, Sinorhizobium meliloti 1,021 (Hwang et al., 2010), which have the potential to help increase production in agriculture, and S. fredii is a typical rhizobium widely used on alkaline-saline land (Han et al., 2009).S. 
fredii CCBAU45436 is the dominant strain among a dozen sublineages that can perform nitrogen fixation with some Chinese soybean cultivars efficiently (Munoz et al., 2016). To better understand the metabolic capabilities of bacteria, genome-scale metabolic network models (GSMMs) are widely used.GSMMs are mathematical models that have become crucial systems biology tools guiding metabolic engineering (Ye et al., 2022).At present, the GSMMs of model organisms such as Saccharomyces cerevisiae (Heavner and Price, 2015) and Escherichia coli (Weaver et al., 2014) are relatively comprehensive, but there are still many blanks for non-model organisms, which results in significant limitation for their utilization.There are some GSMMs of none-model organisms like iZM516 (Wu et al., 2023), iQY1018 (Yuan et al., 2023), and iZDZ767 (Zhang et al., 2023) which reveal potentially efficient ways to produce substances with economic benefits by microorganisms (Huang et al., 2023).In 2020, Contador and others developed iCC541, the first generation of the metabolic network model of S. fredii CCBAU45436 (Contador et al., 2020).However, as it contains inadequate genes and reactions, it is difficult to well reflect the whole metabolism of the rhizobium.On the one hand, it scored 70% in the latest version of the MEMOTE test, indicating its imperfect aspects, which need to be updated.In addition, it used old locus tag and EC number, which is inconvenient to understand and apply, so we urgently need a more complete metabolic network model to comprehensively reflect the metabolism of the rhizobium. The reconstructed metabolic network model, iAQY970, is a powerful tool for studying nitrogen-fixing bacteria, which can well reflect its metabolic status.The reconstruction was based on iCC541, on which new genes and reactions were added from databases like KEGG, ModelSEED and MetaCyc in line with the newest annotation information.The gap-filling process helped to enhance the global connectivity of the model by reducing the gaps caused by imperfect annotation information.Biolog phenotype microarray can directly measure an organism's physical performance in a specific environment (Shea et al., 2012).With the experiments, we can test the capacity of utilizing different carbon, nitrogen, sulfur sources and other nutrients of the bacteria, by which we can validate the simulation accuracy under the freeliving condition of the model.It has been reported that the correlation between transcriptome and proteome data of nitrogen-fix bacteria was not strong (Rehman et al., 2019), so we used proteome data as the second data.Integrating proteome data into the model, we established models under different conditions, which was a good method to predict the metabolic status of the rhizobium in specific circumstances.We further validated the simulation of the metabolic model by comparing the predicted results with experiment information published in previous literature.Finally, we predicted four possible modules and two target genes in iAQY970 with a great impact on the rhizobium during nitrogen-fixing period, which provided guidance to improve the efficiency of nitrogen fixation through strain modification in the future. Draft model for S. 
fredii CCBAU45436 The new model was constructed by manual method following the protocol including draft reconstruction, refinement of reconstruction, conversion of reconstruction into computable format, network evaluation, and data assembly and dissemination (Thiele and Palsson, 2010) (Figure 1).The iCC541 model was used as a template, and the metabolic network was reconstructed by adding reactions from KEGG (https://www.kegg.jp/kegg/)(Kanehisa and Goto, 2000), ModelSEED (https://modelseed.org/)(Seaver et al., 2021), and MetaCyc (https://metacyc.org/)(Caspi et al., 2020) databases.The genome information used in the reconstruction process was downloaded from NCBI (https://www.ncbi.nlm.nih.gov/datasets/genome/GCF_003100575.1), and iCC541 model was obtained from the GitHub website (https://github.com/cacontad/SfrediiScripts).The annotation information of genes, metabolites and reactions corresponding to other databases in the model was based on the KEGG and ModelSEED numbers.SBO annotation was added according to MEMOTE test (Lieven et al., 2020). Determination of the objective equation The objective equation often determines the optimization direction of the entire metabolic model.In this study, the model was applied to the simulation of the rhizobium both in the freeliving and symbiotic state, in which the biomass equation in the freeliving state referred to the iGD1575 model (DiCenzo et al., 2016), and the symbiosis equation in the symbiotic state referred to the iCC541 model (Contador et al., 2020).They were modified according to the model construction.In the free-living state, the biomass equation includes DNA, RNA, proteins, phosphatidylethanolamine, poly-3-hydroxybutyrate (PHB), glycogen, and putrescine.The chemical formula of biomass can be found in Supplementary Table S1.To decrease the potential bias from flux balance analysis (FBA) (Chan et al., 2017), the molecular weight (MW) of biomass was set to be 1 g/mmol.In silico media compositions were 1.44 mmol g −1 h −1 malate, 1.38 mmol g −1 h −1 succinate, 1.26 mmol g −1 h −1 oxygen, 2 mmol g −1 h −1 glutamate and 0.01 mmol g −1 h −1 inositol, which were confirmed by previous literature (Contador et al., 2020). Gap filling Due to the inaccurate gene annotation information and reaction libraries, the draft metabolic network model often has gaps that damage the flux balance of the model.Based on the information of the KEGG pathway, we manually filled the gaps in the draft metabolic network model.There are four methods we used to reduce the gaps.Many metabolites have aliases that could cause them to be represented by different symbols in the model; we checked the metabolites of the model to ensure there were no duplicated metabolites in the model.By unifying duplicated metabolites, we lessened dead metabolites and improved connectivity.Also, the wrong direction of the reactions resulted in obstruction of the circulation of metabolites.So, we adjusted the direction of the reaction properly when it was wrong.However, if the methods above did not work, new reactions had to be added to connect metabolites.Some reactions became blocked for lack of sink reactions and demand reactions.The blocked reactions were checked by the function "findBlockedReaction" in COBRA Toolbox (Heirendt et al., 2019).Workflow of metabolic network reconstruction iAQY970. 
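As a rough illustration of the simulation setup described above, the following COBRApy sketch applies the in silico medium (malate 1.44, succinate 1.38, oxygen 1.26, glutamate 2, and inositol 0.01 mmol gDW-1 h-1), runs FBA, and lists blocked reactions. The file name, exchange-reaction identifiers, and objective-reaction identifier are assumptions for illustration; the actual IDs depend on the naming used in the published iAQY970 model.

# Sketch of the in silico medium and FBA described above, using COBRApy.
# Exchange and objective reaction IDs are hypothetical placeholders.
import cobra
from cobra.flux_analysis import find_blocked_reactions

model = cobra.io.read_sbml_model("iAQY970.xml")  # file path is an assumption

# Uptake rates (mmol gDW-1 h-1) from the text; negative lower bounds denote uptake.
medium_uptake = {
    "EX_mal__L_e": 1.44,
    "EX_succ_e": 1.38,
    "EX_o2_e": 1.26,
    "EX_glu__L_e": 2.0,
    "EX_inost_e": 0.01,
}
for rxn_id, rate in medium_uptake.items():
    model.reactions.get_by_id(rxn_id).lower_bound = -rate

# Optimize the symbiotic objective (objective reaction ID is also an assumption).
model.objective = "SYMBIOSIS"
solution = model.optimize()
print("symbiotic product flux:", solution.objective_value)

# Gap-filling diagnostics: reactions that can carry no flux at all, analogous to
# the findBlockedReaction check done with the COBRA Toolbox.
blocked = find_blocked_reactions(model)
print(f"{len(blocked)} blocked reactions remain")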
Biolog phenotype microarray experiment and analysis Biolog phenotype microarray was a universal method to detect the absorption of different substances for various bacteria.The experiment was performed in Omnilog PM automatic system using GEN III MicroPlate ™ to test the utilization of 71 carbon sources.To check whether the model can get the result consistent with the experiment, the biomass reaction in the model was set as the objective equation to calculate the FBA value, and the tested substance was used as the only exogenous intake. Modeling with proteome data The proteome data was integrated into the metabolic network by E-Flux method (Colijn et al., 2009) to construct models of rhizobium in the logarithmic and stable phases in the free-living condition and parasitized in cultivated soybean and wild-type soybean under symbiotic condition.The proteome data was obtained from Rehmen's work (Rehman et al., 2019), and the average of the peptide values in three replicate experiments was adopted as the protein expression data.Protein expression data was mapped to the model by parsing the gene-protein-reaction (GPR) rules associated with each reaction.Flux variability analysis (FVA) was used to determine the boundary of each metabolic reaction.The lower bound of the reactions that were more than 0 and the upper bound of the reactions that were less than 0 were set to 0. The ultimate boundary was calculated by E-Flux method and the results are in Supplementary Table S2.Since Rehmen's work only had S. fredii CCBAU25509 proteome data about wild-type soybean accession W05, we employed the protein homology mapping method (Kellis et al., 2004) to match the S. fredii CCBAU25509 genes with the S. fredii CCBAU45436 genes, using "Compare Two Proteomes" tool of KBase (Arkin et al., 2018) with a minimum suboptimal best bidirectional hit (BBH) ratio of 90%.After obtaining the original results, we chose the best matching outcome in the bidirectional comparison results.The original results from KBase and the processed data can be found on GitHub and in Supplementary Table S3. Analysis of essential genes and reactions By identifying the essential genes and reactions, we can have deep insights into the life process of living organisms, which will be the key tool to control it freely.The "singleGeneDeletion" function in the COBRA Toolbox was used for the prediction of essential genes, and the "singleRxnDeletion" function was used for the prediction of essential reactions.Among them, genes and reactions of which the ratio of the growth rate of mutants to normals is less than or equal to 0.05 were considered as essential genes and essential reactions.Detailed information on the essential genes and essential reactions can be found in Supplementary Table S4.To ensure the reliability of the essential genes, we used the data from the literature and performed the function of BLASTP with genes in the Database of Essential Genes (DEG) (http://origin.tubic.org/deg/public/index.php/index)(Luo et al., 2021).Moreover, Geptop (http://guolab.whu.edu.cn/geptop/) was used to further predict the essentiality of genes, which was a powerful tool for prediction of essential genes of prokaryotic organisms (Wen et al., 2019).The genes were validated as essential genes if they were reported in literature or had identity score over 50% in BLASTP with genes in DEG or had an essentiality score over 0.24 predicted by Geptop.Detailed information of validated results can be found in Supplementary Table S5. 
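A minimal sketch of the E-Flux-style integration described above is given below: protein abundances are mapped onto reactions through their gene associations and used to scale the flux bounds. This is a simplification (reaction expression is taken as the mean abundance of the associated genes rather than a full parsing of AND/OR rules in the GPR), and the file name, gene locus tags, and abundance values are placeholders.

# Simplified E-Flux-style sketch: cap reaction bounds in proportion to the
# abundance of their associated gene products. Inputs are placeholders.
import cobra

model = cobra.io.read_sbml_model("iAQY970.xml")

# Hypothetical protein abundance table: gene locus tag -> average peptide count.
abundance = {"AB395_RS29400": 120.0, "AB395_RS18105": 45.0}
max_abundance = max(abundance.values())

for rxn in model.reactions:
    if not rxn.genes:
        continue  # leave reactions without gene associations unconstrained
    values = [abundance.get(g.id, 0.0) for g in rxn.genes]
    scale = (sum(values) / len(values)) / max_abundance  # normalized 0..1
    # Tighten bounds proportionally to expression while preserving directionality
    # (reactions with forced positive lower bounds are not handled here).
    new_ub = rxn.upper_bound * scale if rxn.upper_bound > 0 else rxn.upper_bound
    new_lb = rxn.lower_bound * scale if rxn.lower_bound < 0 else rxn.lower_bound
    rxn.bounds = (new_lb, new_ub)

condition_solution = model.optimize()
print("condition-specific objective flux:", condition_solution.objective_value)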
Analysis tools Flux balance analysis (FBA) simulates optimal metabolism at steadystate, which is a useful tool for predicting flux distributions in genomescale metabolic models and models integrated with various omics data (Orth et al., 2010).Flux variability analysis (FVA) can help calculate the range of flux values that can be achieved for each reaction in the model.COBRA Toolbox v3.0 was mainly used for model construction, FBA, FVA analysis, and essentiality analysis of genes and reactions.COBRApy was mainly used to convert sbml format into json format (Ebrahim et al., 2013).Escher (https://escher.github.io/#/) was used for the mapping of metabolic network (King et al., 2015).All simulations were performed in MATLAB (R2023a) and the solver was GLPK. Reconstructed genome-scale metabolic network model iAQY970 and comparison with iCC541 In this research, based on the previous iCC541 model, a new genome-scale metabolic model iAQY970 for S. fredii CCBAU45436 was reconstructed according to the latest gene annotation information from NCBI, which contained 970 genes, 1,052 reactions and 942 metabolites including two cellular phases. The new model replaced old locus tag and updated the previous enzyme annotation information from BRENDA (Chang et al., 2021) database.In addition, the defects of wrong reaction direction and metabolite duplication in the previous model were further corrected, and the gaps were lessened by filling.Consequently, a more complete metabolic network map was drawn and can be found on GitHub. The detailed information of iAQY970 model can be found on the website (https://github.com/AnqiangYe/iAQY970/tree/main). The general features of iAQY970 and that compared with iCC541 are in Table 1.In total, iAQY970 contains 970 genes, accounting for 14.7% of the whole genes, while iCC541 contains 541 genes, accounting for only 8.2%.The model iAQY970 has higher gene coverage conducive to the more comprehensive simulation of the metabolic process.Moreover, iAQY970 can simulate the metabolic process both in free-living and symbiotic states but iCC541 considered only a symbiotic state.The two models were evaluated using a standardized genome-scale metabolic model test suite named MEMOTE which can assess model consistency including stoichiometric consistency, mass balance, charge According to the blocked reactions, we repaired the gaps in iCC541.The 167 blocked reactions in iCC541 were lessened to 39.In the pathway of Pyrimidine Metabolism, the pathway from 5methylcytosine to 3-amino-isobutyrate was blocked for lack of sink reactions and demand reactions, which were filled by transport reactions.No reaction can connect thymidine and thymine, so we added reaction of phosphate deoxy-alpha-Dribosyltransferase to connect the two metabolites.Since the rhizobium had nucleotide diphosphatase (EC 3.6.1.9),we added reaction of dTTP diphosphohydrolase to make the flux of dTTP circulate.The process is shown in Figure 2C. 
In symbiosis condition, the rhizobium used the malate and succinate from host plant as carbon source to keep flux work and produce symbiotic products in return.The content of the symbiotic product was obtained from iCC541, and the uptake of malate and succinate were set to 1.44 mmol g −1 h −1 and 1.38 mmol g −1 h −1 , respectively according to literature (Contador et al., 2020).FBA analysis was performed in the symbiotic case, and the flux of symbiotic product showed that iCC541 was 0.0487 mmol/gDW/h and iAQY970 reached 0.9528 mmol/gDW/h (Table 2).Also, the flux of energy and reducing factors were faster in iAQY970.We found that iCC541 cannot completely reach the maximum uptake limitation of malate and succinate which may be one important reason for the low flux of symbiotic products.Moreover, due to the addition of sink reaction the flux of symbiotic products improved. Phenotyping data analysis Biolog phenotype microarray was performed to validate the capacity of the model reflecting the utilization of different carbon sources.There were 43 carbon sources that can be used by S. fredii CCBAU45436 among 71 carbon sources.To compare the model and experiment result, we made each substance as the only intake carbon source and calculated the flux by FBA analysis using a minimal Comparison of the results between iAQY970 prediction and Biolog phenotype microarray.S6. Analysis of proteome data combined with metabolic network models In order to further analyze the accuracy of the model simulation in free-living and symbiotic states, proteome data analysis was used to verify the accuracy of the model.With the increasing data of multi-omics, it was popular to build a constraint-based metabolic model based on GSMM using transcriptome, proteome and thermodynamic data which can reflect specific conditions.Using proteome data instead of transcriptome data was a good method to avoid the deviation between actual protein expression and transcriptional level expression.The proteome data were from Rehmen's work (Rehman et al., 2019), and the number of peptides was adopted as the protein expression profile.The specific models were built from iAQY970 by E-Flux method.The number of proteins expressed by each gene was used to determine the lower bound and upper bound of every reaction.The distribution of agent flux in the logarithmic and stable phases and their maps can be found on GitHub (/Map), and the metabolic flux and its upper and lower bounds are shown in the Supplementary Table S7.The final biomass synthesis flux in the logarithmic phase was 0.4009 mmol/gDW/h and in the stable phase was 0.3340 mmol/gDW/h (Table 3), indicating that in the logarithmic phase the biomass synthesis was faster than in the stable phase, which was consistent with the fact that bacteria grew faster in logarithmic phase and accumulated more biomass during this period. Since there was currently only S. fredii CCBAU25509 proteome data on wild soybean accession W05 in symbiotic case, we used the protein homology matching method to map the gene expression of the S. fredii CCBAU45436 to the expression data of the S. 
fredii CCBAU25509.We selected the overlap with the highest hit rate in the genomes of the two species, and the mapping and hit rates of the two species were shown on GitHub.Similarly, we used the proteome data to build models under two conditions.The fluxes of each metabolic reaction in cultivated soybean and wild soybean can be found in Supplementary Table S7.In the case of symbiosis, the symbiotic reaction flux was 0.9528 mmol/ gDW/h in cultivated soybean and 0.5720 mmol/gDW/h in wild soybean (Table 3), indicating that the nitrogen-fixing activity of nitrogen-fixing bacteria was better in cultivated soybean, which was consistent with the actual situation.When parasitized in wild soybean, the rhizobia had more flux in fatty acid metabolism but the uptake of succinate reduced to zero indicating poor capacity of utilizing of carbon sources from host plant which might be the reason for less flux of symbiotic reaction. Analysis of essential genes and essential reactions According to the gene-protein-reaction (GPR) association, genes were classified as essential genes and non-essential genes determined by whether the reactions should carry nonzero flux to satisfy the objective equation.The prediction of essential genes in a free-living state showed that a total of 143 genes would affect growth, of which 78 were essential genes.The prediction of essential genes in the symbiotic state showed that a total of 184 genes would affect growth, of which 94 were essential genes. The Venn graph of predicted essential genes in iAQY970 and iCC541 model was shown in Figure 4A.There were 86 genes predicted both by iAQY970 and iCC541 in symbiotic condition, indicating great consistency in two models.As many researches involved data of symbiotic genes, they were used to verify our prediction.Among the essential genes, 44 genes were confirmed by literature information (Supplementary Table S5), and 30 genes were confirmed by BLASTP against Database of Essential Genes (DEG) (Supplementary Table S5), which determined essential genes when identity score was more than 50%.In addition, 33 genes were predicted as essential genes by Geptop, and the comparison of essential genes predicted by DEG and Geptop was shown in Figure 4B.In total, 27 genes were predicted as essential genes both by DEG and Geptop which showed good consistency of their outputs.Among all essential genes predicted by iAQY970, 74 genes were confirmed to be right based on the third part proof, retaining 18 genes without proof and 2 were contradictory with the literature.Nif genes played crucial role during SNF by encoding the nitrogenase complex and regulatory proteins involved in nitrogen fixation (Lindstrom and Mousavi, 2020).In iAQY970, three nif genes, AB395_RS31805, AB395_ RS31800 and AB395_RS31060 were added firstly and it was supposed to cause higher efficiency in nitrogen fixation after such supplement.In this model, genes in the TCA cycle, AB395_ RS29400 and AB395_RS18105 (acetyl-CoA carboxylase, EC 6.4.1.2),were classified as essential genes indicating that the TCA cycle had a great impact on energy which was important during nitrogen fixation. 
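The gene essentiality screen described in the Methods (single gene deletions with a mutant-to-wild-type growth ratio of at most 0.05 counted as essential) could be reproduced along the following lines with COBRApy's single_gene_deletion, in place of the COBRA Toolbox function used in this study; the model file name is an assumption.

# Sketch of the essentiality screen with COBRApy. The 0.05 growth-ratio
# cut-off follows the criterion given in the Methods above.
import cobra
from cobra.flux_analysis import single_gene_deletion

model = cobra.io.read_sbml_model("iAQY970.xml")
wild_type_growth = model.slim_optimize()

# Returns a pandas DataFrame with a 'growth' column in recent COBRApy versions.
deletion_results = single_gene_deletion(model)
deletion_results["ratio"] = deletion_results["growth"] / wild_type_growth

# Infeasible deletions (NaN growth) are treated as essential.
essential = deletion_results[deletion_results["ratio"].fillna(0) <= 0.05]
print(f"{len(essential)} genes predicted essential out of {len(deletion_results)} tested")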
In previous report, the lack of ilv genes might result in no nodule formation which destroyed nitrogen fixation (Prell et al., 2009) so that not only ilvD (AB395_RS15445) but also ilvN (AB395_RS10030) was added in our model.As Figure 4A showed, there were 42 essential genes both in free-living and symbiotic condition, which indicated their great importance during whole life of the rhizobium.For example, although it was reported that PHB was necessary during the symbiotic period (Udvardi and Poole, 2013), it was still vital in free-living condition to keep the organism alive.In addition, pur family genes also had great importance in the life of the rhizobium, as purL (AB395_RS08020), purQ (AB395_ RS08000), purM (AB395_RS04210), purD (AB395_RS02495), purS (AB395_RS07990), purH (AB395_RS17830) appeared in result and were checked to be right.Genes of pur family mainly affected the synthesis of phosphoribosylformylglycinamidine which was the middle compound to connect Purine Metabolism with Thiamine Metabolism and Alanine, Aspartate and Glutamate metabolism. Essential reactions were predicted to appear particularly in different conditions.The essential reactions in different subsystems were shown in Figures 4C, D. Fatty Acid Biosynthesis showed quite difference in two conditions, which was important in symbiotic condition but weighed little in freeliving condition.Acyl carrier protein was prominent in forming symbiotic product but seemed not necessary in free-living condition.In contrast, Phenylalanine Tyrosine Tryptophan Biosynthesis occupied an important position in free-living condition, indicating adequate demand for aromatic amino acids.Moreover, Purine Metabolism and Glycolysis/ Gluconeogenesis were crucial in the life of the rhizobium, and during symbiotic period it showed more active phenomenon in exchange reaction. Modules of nitrogen fixation Rhizobia have the capacity of biological nitrogen fixation in the symbiotic state.However, since there were too many genes and reactions during nitrogen fixation, it was difficult to find core modules for nitrogen fixation.In order to mine the nitrogen fixation module, we analyzed four possible nitrogen fixation modules based on the new model according to the production of reductive ferredoxin because nitrogenase (EC 1.18.6.1) was the top 4), and we set the lower bound and upper bound of one reaction to 0 successively and calculate the NADH/ NADPH production, ATP production, Symbiotic production and Symbiotic nitrogen fixation which can be found in Table 5. There were four modules that mainly affected the production of reductive ferredoxin and subsequently influenced the function of nitrogenase: Module 1 (Nicotinate and Nicotinamide), Module 2 (Terpenoid Backbone Biosynthesis), Module 3 (Carbon Metabolism), Module 4 (Sulfur Metabolism).The fluxes of the four nitrogen fixation pathways can be found on GitHub (https://github.com/AnqiangYe/iAQY970/tree/main/Map/Flux).The symbiotic production flux of Module 1, Module 2 and Module 4 were 0.9528, and the symbiotic production flux of Module 3 was 0. 2745.According to the results, Module 3 had defect in nitrogen fixation and the reason had to do with the energy. 
To further find the key genes which can enhance the production of Fixed NH3, we adopted the minimization of metabolic adjustment (MOMA) algorithm (Segre et al., 2002) to identify potential genes (Table 6).The upper bound of Fixed NH3 exchange reaction was set to 0.001 mmol/gDW/h and reactions were increased by FBA simulation in each module.By overexpressing genes, ferredoxin NADP reductase (EC 1.18.1.2),xanthine dehydrogenase (EC 1.17.1.4)and sulfite reductase (EC 1.8.7.1) directly improve the production of reductive ferredoxin, of which sulfite reductase (EC 1.8.7.1) had the most significant impact on the production of Fixed NH3.In module 2, the overexpression of 4-hydroxy-3-methylbut-2-en-1-yl diphosphate reductase (EC 1.17.7.4) and isopentenyl-diphosphate DELTA-isomerase (EC 5.3.3.2) can increase the production of Isopentenyl diphosphate and Dimethylallyl diphosphate in Terpenoid Backbone Biosynthesis, which indirectly improves the production of reductive ferredoxin by the reaction of Isopentenyl-diphosphate.However, in simulation overexpression of Carbon monoxide dehydrogenase did not work and the entire metabolic network collapsed due to the inability to reach the overexpression value, resulting in a calculated result of 0. The enzymes sulfite reductase (EC 1.8.7.1) and nitrogenase (EC 1.18.6.1) were excellent target genes for enhancing nitrogen fixation since their overexpression led to huge improvement in simulation.According to the simulation, we should not only overexpress the nitrogenase (EC 1.18.6.1)directly but also improve the Sulfur Metabolism and Terpenoid Backbone Biosynthesis as well as silencing the genes that can consume reductive ferredoxin. Conclusion In general, we updated the metabolic network based on the first-generation iCC541 network model through a manual scheme, and designed the new model iAQY970.Compared with the previous model, iAQY970 had higher gene coverage and reflected a more complete metabolic process of S. fredii CCBAU45436.By integrating the proteome data, the metabolic network models under two specific conditions were constructed, and the results showed that the logarithmic phase of the metamorphosis was higher than the stable phase in the free-living case.And the nitrogen fixation reaction flux of parasitism in cultivated soybean was higher than that in wild soybean in the symbiotic case.These simulation results were consistent with the actual situation, which further verified the reliability of the metabolic network.Moreover, 94 essential genes were predicted by iAQY970, which were evaluated also by DEG and Geptop.Finally, four key nitrogen fixation modules were analyzed and some target genes were predicted by MOMA according to the iAQY970 model, which provided helpful guidance for the subsequent optimization of the nitrogen fixation process of the rhizobium.This more comprehensive metabolic model can help researchers better understand the metabolic process of the rhizobium and provide a powerful tool for its development and utilization. 
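The MOMA-based overexpression scan described above could be prototyped roughly as follows. The reaction identifiers for the candidate enzymes and for the fixed-NH3 exchange are hypothetical placeholders, and forcing a reaction to carry a multiple of its reference flux is only one simple way to mimic overexpression.

# Sketch of a MOMA-based overexpression scan with COBRApy. Reaction IDs
# ("SULFITE_REDUCTASE", "NITROGENASE", "EX_nh3_fixed") are hypothetical.
import cobra
from cobra.flux_analysis import moma

model = cobra.io.read_sbml_model("iAQY970.xml")
reference = model.optimize()  # wild-type flux distribution used as the MOMA reference

def simulate_overexpression(rxn_id, fold=2.0):
    """Force a candidate reaction to carry at least `fold` times its reference
    flux, solve MOMA for the minimally adjusted flux state, and report the
    fixed-NH3 (symbiotic nitrogen) export flux."""
    m = model.copy()  # work on a copy so MOMA constraints do not accumulate
    rxn = m.reactions.get_by_id(rxn_id)
    rxn.lower_bound = fold * reference.fluxes[rxn_id]
    perturbed = moma(m, solution=reference, linear=True)
    return perturbed.fluxes["EX_nh3_fixed"]

for candidate in ["SULFITE_REDUCTASE", "NITROGENASE"]:
    print(candidate, simulate_overexpression(candidate))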
balance and metabolite connectivity and annotation of metabolite, reaction, gene and SBO. Comparisons of MEMOTE scores of the two models are shown in Figure 2A. The MEMOTE version was 0.16.1 and the full reports of the MEMOTE score can be found on GitHub. The scores indicate that iAQY970 has more complete annotation than iCC541. The number of reactions and the number of blocked reactions contained in each subsystem of either model are shown in Figure 2B. The results show that iCC541 has 167 blocked reactions, accounting for 31% of the total, while the new model has 269 blocked reactions, accounting for 25.6% of the total. The total number of reactions increased, but the ratio of blocked reactions decreased. Meanwhile, some blocked reactions in the old model were repaired during the construction of the new model.
FIGURE 2 (A) The result of the MEMOTE test. (B) The number of reactions and the number of blocked reactions contained in each subsystem of either model. (C) Gap-filling in the pathway of Pyrimidine Metabolism; the red lines represent the blocked pathways in the iCC541 model, the green lines represent the correct pathways in the iCC541 model, and the blue lines represent the added pathways in the iAQY970 model.
medium. The results of simulation differing from the experiment were used to modify the model reconstruction. Since we initially only took succinate and malate into account, we added more exchange and transport reactions to ensure the model can take up the other carbon sources, and gap-filling reactions were added to connect the carbon sources with the metabolites in the model. Finally, among the 43 carbon sources that can be used by S. fredii CCBAU45436 in the experiment, 30 of them could be utilized in simulation. Among the 13 carbon sources inconsistent with the experiment, 8 of them (Gentiobiose, D-Turanose, β-Methyl-D-Glucoside, D-Fucose, D-Glucuronic Acid, Glucuronamide, Methyl Pyruvate, β-Hydroxy-D,L-Butyric Acid) lacked information in databases like KEGG or ModelSEED, indicating existing limitations in model reconstruction. Due to little knowledge of these carbon sources, it was hard to include them in the genome-scale metabolic network model. After model modification, among the 71 carbon sources, 58 carbon sources were consistent with the experiments (Figure 3), which showed the high accuracy of the iAQY970 model (81.7%). Detailed results of the experiment can be found in Supplementary Table S6.
FIGURE 4 (A) The Venn graph of predicted essential genes in the iAQY970 and iCC541 models. (B) Essential genes predicted by DEG and Geptop in symbiotic condition. (C) Essential reactions predicted in the iAQY970 model under symbiotic condition. (D) Essential reactions predicted in the iAQY970 model under free-living condition.
TABLE 1 General features of iAQY970 compared with iCC541. a Blocked reactions exclude exchange reactions and transport reactions.
TABLE 2 Comparison of symbiosis yield and production of energy and reducing factors.
TABLE 3 Flux of objective reactions using proteome data.
TABLE 4 Reactions which can produce reductive ferredoxin.
TABLE 5 Comparison of nitrogen fixation in four modules.
TABLE 6 Target genes overexpressed by MOMA analysis and the predicted metabolism situation.
v3-fos-license
2023-08-15T15:02:35.954Z
2023-08-01T00:00:00.000
260894486
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://assets.cureus.com/uploads/original_article/pdf/176476/20230813-13419-808089.pdf", "pdf_hash": "ca30898815526c66e79a72cbcd917e874cd8cde4", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41643", "s2fieldsofstudy": [ "Medicine" ], "sha1": "7e2e30b61e65cf3fa2ce637b9daf8d7d827268f4", "year": 2023 }
pes2o/s2orc
COVID-19 Infection Among Healthcare Workers at a Tertiary Healthcare Hospital Purpose: SARS-CoV-2 or COVID-19 virus was the culprit of the global pandemic that began in 2019. With alarming mortality rates reaching sky-high worldwide, the virus prompted the masses to switch to online working. However, this was not feasible for healthcare workers (HCWs) exposed to a higher-than-normal risk of acquiring COVID-19 infection. This study aims to observe the prevalence of COVID-19 positivity among the various areas of a healthcare facility in Saudi Arabia. Methods: A cross-sectional study of positive employees among all departments at a tertiary care hospital in Riyadh, Saudi Arabia, such as administration, capital projects/facilities, and healthcare. The study included all hospital employees-permanent staff, rotating physicians, and trainees-who tested positive for COVID-19 between March 20, 2020 and December 30, 2020. Results: It was found that HCWs had the most significant number of infected individuals with nursing staff being the predominant demographic. This was followed by the capital projects/facilities departments, of which the environmental services staff were the most infected. Conclusion: It is pertinent that strict protocols be taken by hospital management to limit the spread of future infectious diseases within hospital settings. This includes the provision of personal protective equipment (PPE) and adequate education on its proper usage, alongside regular surveillance of staff with regard to adherence and early detection of symptoms. Introduction SARS-CoV-2 or COVID-19 virus belongs to a family of viruses known as coronavirus.With a varying onset and presentation of symptoms, including no symptoms at all, SARS-CoV-2 was responsible for the pandemic in 2019 [1].With a mortality rate ranging from 3% to 6%, the virus has especially shown disastrous effects on patients with underlying comorbidities.Patients admitted to the intensive care unit, the elderly, and the immunocompromised, to name a few, are especially prone to a more serious infection [2]. The rapidly spreading infection forced the ruling authorities of most countries to implement quarantine, isolation, and contact-tracing protocols [3].The main motivation for implementing these measures was to combat the contagious spread and prevent its destructive outcomes [4].While the majority of the work was detoured to the online platform, professionals in health care were exposed to higher than-usual working hours in the hospital.The exponential spread of infection required healthcare workers (HCWs) to upscale their working hours, posing serious threats to their mental and physical health.They were more susceptible to the infection depending on the varying degrees of exposure, with studies reporting between 3%-17% infection rate in HCWs [5].A staggering statistic reported in Italy showed over twelve thousand infected and over a hundred HCWs dead as a result of COVID-19.Even though these numbers were recorded as of April 2020, figures continued to rise [6]. The lack of proper personal protective equipment (PPE) supply or its adherence, among many other factors, rendered HCWs especially vulnerable to the virus [7].Our objective of this study is to determine the prevalence of COVID-19 infection among healthcare staff in a tertiary hospital in Saudi Arabia. 
Materials And Methods This is a cross-sectional study, that took place in King Faisal Specialist Hospital and Research Center (KFSHRC), Saudi Arabia.KFSHRC is a tertiary care hospital that offers a wide range of medical services, education, and research activities with a total of around 12,400 staff.We have defined HCWs in this study as all workers that work within the hospital setting as they are all potentially implicated, whether directly or indirectly, by hospital-acquired infections such as COVID-19.The majority of staff are working under healthcare delivery such as medical departments and patient flow management.The remaining group is mainly administrative workers concerned with jobs related to financing, IT services, recruiting, risk management, maintenance, food services, drivers, research, and training.In this study, staff was classified into three main categories according to their potential contact with patients-which includes; Health care, Capital project and facilities, and Administration. Included in the study were all hospital employees-permanent staff, rotating physicians, and trainees who tested positive for the COVID-19 polymerase chain reaction (PCR) from March 20, 2020 to December 30, 2020, whether during surveillance or based on clinical presentations. Since the pandemic started, all symptomatic employees were required to be off work for five days and do COVID-19 tests.Exposed staff were asked to fill out a contact tracing form that allocate them to a category of risk.If the intended staff was categorized as low risk, they need to be screened for COVID-19 after 24 hrs from last exposure.If they were categorized as High risk, they need to be off work for five days and screened for COVID-19 after five days from last exposure.The travel screening clinic also required employees returning from abroad to do COVID-19 tests on their return. HCW data was obtained from the Infection Control and Hospital Epidemiology Department and included age, sex, medical department, date of positive COVID-19 PCR, symptoms status, past medical history, and source of infection.Prevalence is reported using frequency distribution.The subjects belonged to various departments at the hospital.The majority were from healthcare divisions at 55% (Figure 1). FIGURE 1: Number of people testing positive for SARS-CoV-2 according to department Of the 973 individuals who tested positive and belonged to the healthcare division, the majority were employees in the nursing department (50%), while 38% were from the medical departments, and the remaining included pharmacy, admin, and patient flow staff (Figure 2). FIGURE 2: Healthcare department and SARS-CoV-2 positivity by division Out of the 1,763 subjects who tested positive, 30% were employed in capital projects and facilities (Figure 3). FIGURE 3: Capital projects and facilities department and SARS-CoV-2 positivity by division Of these, most of the individuals were part of environmental services (41%), maintenance (26%), and food services (16%).Executive officers who tested positive accounted for 6% of the study population, with guard forces comprising the majority (52%), while materials and supply chain management staff among others accounting for the rest (Figure 4). FIGURE 4: Administrative departments and SARS-CoV-2 positivity by division Other departments whose employees tested positive encompassed education and training, research, information technology, financial affairs, and administration. 
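The frequency tabulation used to report prevalence by department, together with the exposure-triage rule described in the Methods, can be illustrated with a brief sketch; the records below are placeholders rather than study data.

# Illustrative sketch of the department-level tabulation and the exposure
# triage rule described above. Records are placeholders, not study data.
from collections import Counter

records = [
    {"department": "Healthcare", "division": "Nursing"},
    {"department": "Healthcare", "division": "Medical"},
    {"department": "Capital projects/facilities", "division": "Environmental services"},
]

by_department = Counter(r["department"] for r in records)
total = sum(by_department.values())
for dept, n in by_department.most_common():
    print(f"{dept}: {n} ({100 * n / total:.0f}%)")

def triage(exposure_risk):
    """Screening action for an exposed employee, per the protocol above:
    low risk -> PCR 24 h after last exposure; high risk -> off work for
    five days and PCR five days after last exposure."""
    if exposure_risk == "high":
        return "off work 5 days; PCR 5 days after last exposure"
    return "PCR 24 h after last exposure"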
The source of infection was also assessed. For most of the study subjects, this information was missing; among those for whom it was available, hospital sources and community sources accounted for similar proportions. Discussion This study aimed to investigate the prevalence of COVID-19 infection among the HCWs of a tertiary care hospital. We found that medical professionals in direct contact with patients, i.e., the healthcare department, were the most affected by the spread of COVID-19. Within this department, the nursing and medical sectors were the most susceptible and bore the brunt of the spreading infection. This finding reflects the trend observed worldwide, including in Italy, the United States, and China, with higher infection rates prevalent among direct providers of patient care as opposed to other departments in the hospital [5,8-10]. Surprisingly, higher infection rates were also discovered among the personnel employed in the capital projects and facilities department. Environmental services, maintenance, and food services staff were the most affected in this division. This can be attributed either to exposure to undetected infection prevalent among patients and coworkers or to issues related to PPE. One of the major PPE-related factors predisposing hospital staff to COVID-19 infection is the lack of adequate access to it at times of utmost need. Other factors have been reported as well, such as deficiencies in the information received about the use of PPE, inability of workers to apply proper donning and doffing techniques, and scarcity of COVID-19 testing kits during the initial phase of the pandemic [11,12]. The symptom status of infected staff was also assessed in this study. Even though a majority of the care providers displayed symptoms of infection (51%), a large proportion of the study population was asymptomatic (45%). These individuals might have been unaware of the infection and could have been a vital source of its spread. Serological testing of individuals exposed to COVID-19 patients should have been utilized more frequently to identify these asymptomatic or subclinical infections in the healthcare setting [13]. Even though a majority of the study population were not able to identify the source of infection, among those who could, infections acquired in the hospital and in the community were reported in nearly equal proportions. This substantial proportion of hospital-acquired COVID-19 infection is of particular concern. Several studies, including ours, have described remarkable COVID-19 infectivity rates among hospital staff despite the protective measures employed [14,15]. To curb the spread of infection, strict protocols should be deployed by hospital management with regard to the use of PPE; periodic surveillance of hospital staff with proper diagnostic tools and adequate working conditions for providers are also essential.
Conclusions In conclusion, this research study focused on the prevalence of COVID-19 infection among HCWs in a tertiary care hospital. The findings revealed that medical professionals within the healthcare department, particularly the nursing and medical sectors, were the most affected by the spread of COVID-19, consistent with global trends observed in other countries. Higher infection rates were also identified among personnel employed in the capital projects and facilities department, indicating potential issues with PPE access and usage. The study highlighted the significance of proper PPE usage and training for HCWs, as well as the need for periodic surveillance and serological testing to identify asymptomatic carriers within the healthcare setting. Stricter protocols and adequate working conditions are crucial for reducing the spread of infection in hospitals. Addressing these factors will be essential to safeguard the health and well-being of healthcare providers and ultimately contribute to controlling the spread of COVID-19 in healthcare facilities.
A total of 1,763 individuals tested positive for SARS-CoV-2 from March 20, 2020 to December 30, 2020, with an age range of 29 to 49 years and a mean age of 39. Males accounted for 60% of those who tested positive, while the rest were females. Patient demographics and characteristics are detailed in Table 1.
v3-fos-license
2018-04-03T03:52:31.696Z
2006-12-01T00:00:00.000
24631547
{ "extfieldsofstudy": [ "Biology", "Medicine" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "http://www.jbc.org/content/281/48/37150.full.pdf", "pdf_hash": "3ebb6bd30e17d1d963432a0bfce2beb830e06cfb", "pdf_src": "Highwire", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41646", "s2fieldsofstudy": [ "Biology" ], "sha1": "e1319c420840bb2887e087c1e6f980797c2084be", "year": 2006 }
pes2o/s2orc
Human T-cell Leukemia Virus Type I p30 Nuclear/Nucleolar Retention Is Mediated through Interactions with RNA and a Constituent of the 60 S Ribosomal Subunit* Human T-cell leukemia virus type I is the etiological agent of adult T-cell leukemia/lymphoma, an aggressive and fatal lymphoproliferative malignancy. The virus has evolved strategies to escape immune clearance by remaining latent in most infected cells in vivo. We demonstrated previously that virally encoded p30 protein is a potent post-transcriptional inhibitor of virus replication (Nicot, C., Dundr, M., Johnson, J. M., Fullen, J. R., Alonzo, N., Fukumoto, R., Princler, G. L., Derse, D., Misteli, T., and Franchini, G. (2004) Nat. Med. 10, 197-201). p30 is unable to shuttle out of the nucleus in heterokaryon assays, suggesting the existence of specific retention signals. Because suppression of virus replication relies on nuclear retention of the tax/rex mRNA by p30, determining the retention features of p30 will offer hints to break latency in infected cells and insights into new therapeutic approaches. In this study, we used live cell imaging technologies to study the kinetics of p30 and to delineate its retention signals and their function in virus replication. Notably, this is the first study to identify p30 nucleolar retention domains. Using mutants of p30 that localized in different cellular compartments, we show that post-transcriptional control of virus replication by p30 occurs in the nucleoplasm. We further demonstrate that p30 nuclear/nucleolar retention is dependent upon de novo RNA transcripts and interactions with components of the ribosomal machinery. Nucleoli are the main site of ribosome biogenesis, a highly complex process leading to the production of pre-ribosomal particles, which are then released into the nucleoplasm and exported to the cytoplasm as mature ribosomal subunits (1). In contrast to other cellular compartments, the nucleolus is not membrane-limited; its structure is maintained by a major accu-mulation of ribosomal rRNA and proteins such as nucleolin and protein B23. Most proteins are only transiently retained in the nucleolus through protein or RNA interactions, and proteins with long resident times usually harbor specific signals (2). Rather than being nucleolar localization signals (NoLS), 2 these signals tend to be nucleolar retention signals (NoRS). With very few exceptions (3), these NoRS are generally characterized by arginine-and lysine-rich sequences of highly variable sizes, ranging from five amino acids for angiogenin up to several tens for nucleolin (4,5). Often, amino acid sequence analysis fails to predict putative NoRS, especially when they overlap nuclear localization signals (NLS). The nucleolus, the site of transient sequestration and maturation of several cellular factors, is critically involved in the control of the cell cycle, DNA repair, aging, and mRNA export. Many viruses encode nucleolar proteins, which are involved in replication of viral genomes (6) as well as in transcriptional and post-transcriptional regulation of gene expression (7). We found recently that the viral p30 protein is a novel posttranscriptional repressor of human T-cell leukemia virus type I (HTLV-I) replication (8). p30 is a serine/arginine-rich nuclear/ nucleolar protein that complexes with and retains the viral tax/ rex mRNA in the nuclei of infected cells. Hence, decreased expression of the positive regulators Tax and Rex results in inhibition of virus replication (8). 
In this study, we investigated the mechanisms underlying p30 retention. Our results show that p30 utilizes multiple strategies and harbors two nucleolar retention domains. Consistent with its role in RNA trafficking, p30 nuclear/nucleolar retention is partially dependent on de novo mRNA transcription. We also report that p30 interacts with a nucleolar constituent of the large ribosomal subunit, L18a, a protein shown previously (12, 32-34) to modulate internal initiation of translation.

TCC TCT TCT AAG G-3′ (forward) and 5′-C CTT AGA AGA GGA AAG CGC CGG CCG GCT GCG ACG-3′ (reverse)). Single point mutants R91A, R93A, R94A, and R96A were generated by PCR using forward primer 5′-CCC AAG CTT CCA TGG CAC TAT GCT GTT TCG CC-3′ and reverse primers 5′-CGG CCG GCT GCG ACG GGG CGC TCC AGG GGA GAA-3′, 5′-CGG CCG GCT GCG AGC GGG CCT TCC AGG-3′, 5′-CCG CGG CCG GCT GGC ACG GGG CCT TCC-3′, and 5′-GAG GAA AGC CGC GGC GCG CTG CGA CGG GG-3′, respectively. p30-5RA and p30-5RK were generated using double-stranded oligonucleotides containing the corresponding mutations and inserted between ApaI and SacII. The C termini of p30-4RA and p30-4RK were generated by PCR with reverse primer 5′-CCG AAT TCA GGT TCT CTG GGT GGG GAA GG-3′ and forward primers 5′-C TCC TCG AGC AAG AAG TGC AAG TCA AAG TGC GTT TCC CCG CGA GGT GGC GC-3′ and 5′-C TCC TCG AGC GCT GCC TGC GCC TCA GCA TGC GTT TCC CCG CGA GGT GGC GC-3′, respectively, using wild-type p30 as a template. The C termini of p30-9RA and p30-9RK were generated using the same primers and p30-5RA and p30-5RK, respectively, as a template. The N termini of p30-4RA, p30-4RK, p30-9RA, and p30-9RK were generated by PCR using forward primer 5′-CCC CAA GCT TCC ATG GCA CTA TGC TGT TTC GCC-3′ and reverse primer 5′-CTT GCT CGA GGA GAA GAG GAA GCG AAA AAA AGA GCG-3′; digested with HindIII and XhoI; and cloned into pEGFP-C1. p30Δ73-78 was constructed by PCR using forward primer 5′-CA CGC TCG AGT TCC CCG CGA GGT GGC GC-3′ with EcoRI and reverse primer 5′-CG GCT CGA GGA GAA GAG GAA GCG AAA AAA AGA GCG-3′ with HindIII. p30Δ73-98 was generated by PCR with reverse primer 5′-GGA CCG CGG GAA GCG AAA AAA AGA GCG-3′ and HindIII, digested with SacII and HindIII, and ligated with the SacII-EcoRI fragment of p30. p30Δ91-98 was generated by PCR with forward primer 5′-CGG GGC CCT CCA GGG G-3′ and EcoRI, digested with ApaI and EcoRI, and ligated with the HindIII-ApaI fragment of p30. The appropriate mutation in each construct was verified by sequencing. Wild-type p30 and p30-9RA were cloned in-frame with three hemagglutinin (HA) tags into the pMH vector. The HTLV-I long terminal repeat (LTR)-luciferase reporter construct, pBST, and pRL-TK have been described previously (8).

Cell Culture, Transfections, and Fluorescence Microscopy-COS-7 cells were grown on cover slides in Dulbecco's modified minimal essential medium containing 10% fetal calf serum. Transient transfections were performed using Effectene transfection reagent (Qiagen Inc.) with 1 μg of expression plasmids. Forty hours after transfection, cells were fixed in 4% paraformaldehyde and washed with phosphate-buffered saline. The slides were mounted, and images of p30 fused to green fluorescent protein (GFP) were captured using a Nikon EFD3 microscope (Boyce Scientific, St. Louis, MO) and a Nikon camera with a 100× Eplan (160/0.17) objective. The imaging medium SlowFade was from Molecular Probes (Eugene, OR).
The acquisition software Image-Pro Express Version 4 was from Media Cybernetics (Silver Spring, MD). The images presented in this study are representative of a large number of cells observed in three or more independent transfection experiments. For actinomycin D (AMD) and RNase A treatment, COS-7 cells were transfected with 1 μg of GFP-p30 plasmid using Effectene transfection reagent. After 40 h of culture, cells were washed with 1× phosphate-buffered saline and treated for 5 min with saponin (0.1 μg/ml) at 4 °C. After being washed, cells were treated with AMD (5 μg/ml) for 30 min, washed, treated with RNase A (1 mg/ml) for 30 min, and fixed as described above.

Dual-Luciferase Assay, in Vitro Transcription/Translation, and Western Blot Analysis-Firefly and Renilla luciferase activities were measured using the Dual-Luciferase reporter assay system (Promega Corp., Madison, WI) according to the manufacturer's instructions. After treatment and removal of the incubation medium, cells were washed and lysed by addition of 100 μl of the lysis buffer provided with the kit. Ten microliters of cell lysate were added to 100 μl of luciferase assay reagent II, and firefly luciferase activity was measured for 10 s. One hundred microliters of Stop & Glo buffer, which contains the Renilla luciferase substrate and stops the firefly luciferase reaction, were added to the same tube, and Renilla luciferase activity was measured for 10 s in a Berthold junior luminometer. Activity was calculated as the ratio of the values of firefly luciferase to Renilla luciferase activities, and results expressed as fold activation are representative of two independent sets of experiments. In vitro transcription/translation reactions were performed with pMH-p30-HA using rabbit reticulocyte lysates as reported previously (15). Immunoprecipitation was performed with antibody 12CA5 (Roche Diagnostics). Western blotting was performed using 40 μg of protein lysates from transfected COS-7 cells as described in the figure legends and using anti-HA antibody 3F10 (Roche Diagnostics) according to the manufacturer's directions.

Photobleaching-To probe the degree to which p30 or p30 mutant proteins are mobile within the nucleolus of a living cell, we carried out fluorescence recovery after photobleaching (FRAP) and inverse FRAP (iFRAP). For FRAP experiments, five single scans were acquired, followed by a single bleach pulse of 200-500 ms using a spot 1 μm in radius without scanning. Single section images were then collected at 1.6-s intervals. For imaging, the laser power was attenuated to 1% of the bleach intensity. For iFRAP, experiments were performed on a Zeiss LSM 510 confocal microscope with a 100×/1.3 numerical aperture Plan Apochromat oil objective and 3× zoom. GFP was excited with the 488-nm line of an argon laser, and GFP emission was monitored above 505 nm as described previously (9). Cells were maintained at 37 °C with an ASI 400 air stream incubator (Nevtek). The whole nuclear area of transfected cells was bleached, except for a region of one nucleolus, using the 488-nm laser line at 100% laser power. Cells were monitored at 0.5-s intervals for 300 s. To minimize the effect of photobleaching due to imaging, images were collected at 0.2% laser intensity. For quantification, the loss of total fluorescence intensity in the unbleached region of interest was measured using Zeiss software. Background fluorescence was measured in a random field outside of the cells.
For each time point, the relative loss of fluorescence intensity in the unbleached region of interest was calculated as I_rel = (I_t - BG)/((I_0 - BG)·(T_t - BG)), where I_0 is the background (BG)-corrected average intensity of the region of interest during pre-bleaching and T_t is the background-corrected total fluorescence intensity of a neighboring control cell. Typical measurement errors in all experiments were ~15%.

Mass Spectrometry-p30-HA was transcribed in vitro and translated from the T7 promoter of the pMH vector using the TNT quick coupled transcription/translation kit (Promega Corp.). p30-HA was immunoprecipitated overnight at 4 °C using 1 μl (5 μg) of monoclonal antibody 12CA5. As a control, an equal amount of rabbit reticulocyte from the same kit was used and immunoprecipitated overnight at 4 °C with 5 μg of monoclonal antibody 12CA5. Twenty microliters of a 50% protein G-agarose slurry (Invitrogen) were added to both mixtures, followed by incubation at 4 °C for 2 h. The immunoprecipitated complexes were washed three times with 1 ml of radioimmune precipitation assay buffer at 4 °C. The components of the complexes were resolved on 4-20% SDS-polyacrylamide gels (Bio-Rad) and detected by Coomassie Blue staining. The bands specific for the lane containing p30-HA translated in vitro (compared with the control lane) were cut out and analyzed by mass spectrometry.

RESULTS

p30 Is Strongly Retained in the Nucleolar Compartment of Living Cells-For some proteins, nucleolar localization is dependent upon the presence of an NoLS usually characterized by a short stretch of Arg or Lys residues. However, in most cases, nucleolar localization is mediated through a retention mechanism that increases the resident time of a given protein in this cellular compartment. HTLV-I p30 is a nuclear/nucleolar protein. To gain insights into its cellular compartmentalization, mobility, and in vivo kinetics, p30 was cloned in-frame with GFP into the pEGFP-C1 vector and subjected to live cell image analysis, FRAP, and iFRAP. In our FRAP experiments, a defined area of a living cell was bleached irreversibly by a single, high-powered spot laser pulse. The recovery of the fluorescence signal in the bleached area as the consequence of movement of the GFP fusion protein was recorded by sequential imaging scans. The kinetics of recovery are a measure of the mobility of the GFP-fused protein. Interestingly, the experiments showed a rapid and complete recovery of fluorescence in the nucleus (Fig. 1, A and B), indicating a high mobility of GFP-p30 in this cellular compartment. In contrast, a much slower recovery of fluorescence was consistently detected in the nucleolus (Fig. 1, A and B), suggesting that p30 may be specifically retained in the nucleolus. iFRAP as opposed to FRAP is the method of choice for measuring retention kinetics because it provides a relatively direct indication of a protein's residence time in a structure and because the measurement is independent of the size of the structure. In our iFRAP experiments, the entire nucleus with the exception of the region of interest containing the nucleolus was bleached using a pulsed laser. The loss of fluorescence signal in the nucleolus was then monitored by time-lapse microscopy. The rate of decay is a good approximation for the dissociation kinetics of the observed protein from the region of interest (9, 10).
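The normalization above can be transcribed directly into code. The sketch below assumes the denominator grouping (I_t − BG) divided by the product (I_0 − BG)·(T_t − BG), as written in the formula; all intensity values are synthetic and serve only to show the arithmetic applied at each time point.

```python
# Direct transcription of the iFRAP normalization given above:
#   I_rel = (I_t - BG) / ((I_0 - BG) * (T_t - BG))
# I_0 is the pre-bleach intensity of the unbleached region of interest, BG the
# background measured outside the cells, and T_t the total fluorescence of a
# neighboring control cell at the same time point (corrects for imaging bleach).
# All intensity values below are synthetic.

def relative_intensity(i_t, i_0, t_t, bg):
    return (i_t - bg) / ((i_0 - bg) * (t_t - bg))

bg = 5.0
i_0 = 105.0                       # pre-bleach intensity of the region of interest
timepoints = [0.5, 1.0, 1.5]      # seconds after the bleach pulse
roi = [100.0, 92.0, 88.0]         # unbleached-nucleolus intensities over time
control = [101.0, 100.5, 100.0]   # neighboring control cell at the same time points

for t, i_t, t_t in zip(timepoints, roi, control):
    print(f"t = {t:.1f} s  I_rel = {relative_intensity(i_t, i_0, t_t, bg):.5f}")
```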
When iFRAP was applied to cells expressing GFP-p30, only a small fraction of the fluorescence signal was lost within the first 5 s after the bleaching event, followed by a slower decline of the nucleolar signal. After 300 s, a plateau representing the dissociation/association equilibrium of the unbleached population of molecules was reached. Although the rapid initial loss likely reflects the fraction of loosely bound or diffusing GFP-p30, the slower loss of the majority of protein indicates that GFP-p30 was strongly retained in the nucleolus (Fig. 1C). Typical measurement errors in all experiments were Ͻ15%. We then compared the FRAP and iFRAP kinetics of GFP-p30 and nucleolar methyltransferase fibrillarin-GFP. We found that both proteins were strongly retained in nucleoli. The results indicated that p30 exhibited similar albeit slightly faster recovery kinetics compared with fibrillarin ( Fig. 1, B and C). With these interesting observations, we next sought to identify the NoLS of p30. p30 Contains Two Independent Nucleolar Localization/Retention Signals-We generated deletion mutants fused to GFP and analyzed their cellular distribution upon transfection into COS-7 cells ( Fig. 2A). Serial deletion from the C terminus of p30 showed that p30⌬C200 was mostly nuclear but was excluded from the nucleoli and that p30⌬C250 efficiently accumulated in the nucleoli (Fig. 2B). These results suggest the existence of a nucleolar localization/retention signal located between nucleotides 200 and 250. We next performed serial deletions from the N terminus of p30 (Fig. 3A). The results showed that GFP-p30⌬N100, GFP-p30⌬N200, and GFP-p30⌬N280 localized to the nucleoli of transfected cells (Fig. 3B). However, when deletions beyond nucleotide 300 were performed, the resulting fusion proteins, GFP-p30⌬N300, GFP-p30⌬N400, and GFP-p30⌬N500, no longer accumulated in the nucleoli, but were distributed diffusely in the nucleoplasm (Fig. 3B). These results suggest the presence of a second nucleolar localization/retention signal located between nucleotides 280 and 300. Although the abovementioned domains were reported previously as NLS (11), our results clearly indicate that the sequences are in fact required for nucleolar targeting of p30. This finding is not unusual, as NoLS are often part of NLS. Our results further demonstrate the presence of two previously unidentified NLS in the p30 protein, one in the N terminus between nucleotides 1 and 200 and the other in the C terminus between nucleotides 500 and 750. Both GFP-p30⌬C200 and GFP-p30⌬N500 localized to the nucleus (Fig. 3B), in contrast to the diffuse pattern throughout the cell observed for the insertless pEGFP-C1 vector (Fig. 2B). This finding was also confirmed by the fact that in-frame deletion of the previously identified NLS did not alter the nuclear localization of p30, but rather abolished its nucleolar accumulation (see Fig. 6). In contrast to previous findings, our results indicate that the major role of these arginine-rich sequences in p30 is to act as nucleolar localization/retention signals. In Vivo Dissociation Kinetics Reveal Two NoRS-Dissociation kinetics measurements by FRAP and iFRAP of live cells can discriminate between an NoLS and an NoRS. To understand the nucleolar retention mechanism of p30 described above (Fig. 1), we used the truncation mutants GFP-p30⌬N280 and GFP-p30⌬C250, with each mutant having only one of the nucleolar localization/retention signals. 
We measured the dissociation kinetics of GFP-p30, GFP-p30ΔN280, and GFP-p30ΔC250 from the nucleoli. Interestingly, both p30 deletion mutants exhibited distinct dissociation kinetics compared with wild-type GFP-p30. The dissociation kinetics of GFP-p30ΔN280 and GFP-p30ΔC250 from the nucleoli were significantly faster than those of wild-type p30 (Fig. 4B). These data indicate that the two mutants are retained in the nucleoli for a significantly shorter period of time compared with the wild-type p30 protein and suggest that these signals may act as NoRS.

FIGURE 1. A, mobility of GFP-p30 in the nucleoplasm and nucleoli as determined by FRAP. Cells expressing GFP-p30 were imaged before and after photobleaching of a spot of the same size in the nucleoplasm and nucleolus. The recovery of the fluorescence signal was monitored by time-lapse microscopy. The bleached area is indicated by arrows in the pseudocolored panels. Shown are the results from quantification of the recovery kinetics. GFP-p30 recovered rapidly in the nucleoplasm within 25 s in contrast to a significantly slower recovery in the nucleolus. Averages from at least 15 cells are shown. B, mobility of GFP-p30 compared with that of fibrillarin-GFP in the nucleolus. Shown are the results from quantification of the recovery kinetics. GFP-p30 exhibited slightly faster recovery kinetics compared with fibrillarin-GFP. C, iFRAP of GFP-p30 and fibrillarin-GFP in the nucleolus. Cells expressing GFP-p30 and fibrillarin-GFP were imaged before and after photobleaching of the entire nucleus except one nucleolus. The loss of fluorescence signal from the unbleached nucleolus was monitored by time-lapse microscopy. In agreement with FRAP data, GFP-p30 exhibited faster dissociation kinetics from the nucleolus compared with fibrillarin-GFP. Averages from at least 20 cells are shown.

To confirm these results and to identify potential amino acid residues critical for NoRS function, each Arg residue present within the two Arg-rich domains was mutated to Ala by site-directed mutagenesis and cloned in fusion with GFP (Fig. 5A). The cellular distribution of these GFP-fused single point mutants was tested upon transfection of COS-7 cells. As shown in Fig. 5B, the single point mutations appeared to be innocuous, as most mutants displayed no substantial change in localization. Our results are similar to those reported previously for herpes simplex virus type 1 US11 protein (3) and angiogenin (13), for which several amino acid substitutions within the NoRS are necessary to abolish nucleolar localization. To confirm the function of each Arg-rich domain of p30, we made internal deletions or multiple Arg replacements. When in-frame deletion of both Arg-rich domains was performed, the resulting GFP-p30Δ73-98 mutant was localized in the nucleoplasm, but excluded from the nucleoli (Fig. 6A). Consistent with the results presented in Figs. 2 and 3, these findings confirmed that the Arg-rich domains are required for efficient nucleolar accumulation of p30 and suggested the existence of additional NLS within the p30 protein. Each nucleolar localization domain was then deleted separately. Although GFP-p30Δ73-78 became nuclear, excluded from the nucleoli, the cellular distribution of GFP-p30Δ91-98 remained unaffected (Fig. 6A). The results presented here are representative of several transfection experiments, although an occasional nucleolar signal was detected (<10%) for GFP-p30Δ73-98 and GFP-p30Δ73-78 when expressed at high levels (data not shown).
Taken together, these results showed that both domains could serve as NoLS (Fig. 6A). Although GFP-p30⌬N280 lacked the first Arg-rich domain, it localized to the nucleolus when GFP was fused at the C terminus. We believe that this resulted from the folding and accessibility of the second NoLS because placing GFP in the N terminus prevented nucleolar accumulation and resulted in a nuclear pattern similar to that of GFP-p30⌬73-98 (data not shown). We next confirmed the results obtained with in-frame deletion mutants by mutagenesis replacement of all Arg residues with Ala in the first, second, or both Arg-rich domains: GFP-p30 -4RA, GFP-p30 -5RA, and GFP-p30 -9RA, respectively (Fig. 6, B and C). We also substituted Arg with Lys for amino acid charge conservation. As expected, Arg-to-Ala substitutions yielded results similar to those reported in Fig. 5, and both GFP-p30 -4RA and GFP-p30 -9RA were excluded from the nucleoli (Fig. 6, B and C). In contrast, all mutants with Arg-to-Lys substitutions localized similarly to the wild-type protein in the nucleus/nucleolus (Fig. 6C). Quantitative image analysis revealed no significant differences in nuclear/nucleolar localization between p30 -4RK, p30 -5RK, and p30 -9RK (data not shown). Nucleolar Localization of p30 Is Dispensable for Its Post-transcriptional Repression of Virus Expression-We found previously that p30 is a post-transcriptional negative regulator of viral gene expression (8). This function was related to the ability of p30 to sequester the tax/rex mRNA, encoding positive regulators, in the nuclear compartment. Other nucleolar resident proteins such as La are involved in the protection of RNAs from 3Ј-exonucleolytic digestion and nuclear retention (14). Many viruses have evolved proteins that localize to the nucleolus to increase or suppress virus replication (6). Because the p30 - 9RA mutant was excluded from the nucleoli, it offered a unique opportunity to test whether p30-mediated inhibition of virus expression occurs in the nucleus or nucleolus. The HTLV-I molecular clone pBST (16) was expressed in 293T cells along with the HTLV LTR-luciferase reporter construct and p30 or p30 -9RA. In this assay, Tax produced from the molecular clone transactivated the viral LTR fused to the luciferase gene. Hence, luciferase reduction is directly correlated with p30-mediated inhibition of Tax expression, and luciferase is measured as a surrogate indicator of p30-mediated repression (8). Western blot analysis indicated that p30 -9RA migrated a little faster than the wild-type protein presumably as a result of arginine substitution. Of note, p30 -9RA was expressed at lower levels compared with wild-type p30. However, despite the lack of nucleolar localization, p30 -9RA appeared to be as efficient as wildtype p30 in post-transcriptional inhibition of HTLV-I expression (Fig. 7A). In fact, when p30 -9RA and wild-type p30 were normalized for equal expression, no significant difference in post-transcriptional inhibition was found (Fig. 7E). These results strongly suggest that p30-mediated post-transcriptional repression of HTLV-I occurs in the nucleoplasm. We also tested the transcriptional effect of p30 on the viral LTR. Our data indicated no significant difference between wild-type p30 and p30 -9RA, with a 2-fold or less effect on the viral LTR (Fig. 7, B and D). 
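The LTR-luciferase readout discussed above is a ratio normalization: firefly counts driven by the viral LTR are divided by Renilla counts from the co-transfected pRL-TK control, and the result is expressed relative to a reference condition as fold activation. A minimal Python sketch of that arithmetic follows; the reporter readings and condition names are hypothetical, not values from this study.

```python
# Minimal sketch of the dual-luciferase normalization used for the reporter
# assays above: firefly counts are divided by Renilla counts from the same
# lysate, and the ratio is expressed relative to a control condition as fold
# activation. All readings and condition names are hypothetical.

def normalized_activity(firefly_rlu, renilla_rlu):
    """Firefly signal normalized to the Renilla transfection control."""
    return firefly_rlu / renilla_rlu

def fold_activation(sample_ratio, control_ratio):
    """Normalized activity expressed relative to the control condition."""
    return sample_ratio / control_ratio

# Hypothetical example: LTR-luciferase reporter alone vs. reporter plus a Tax source
control = normalized_activity(firefly_rlu=12_000, renilla_rlu=60_000)
activated = normalized_activity(firefly_rlu=480_000, renilla_rlu=55_000)
print(f"fold activation: {fold_activation(activated, control):.1f}")
```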
Nucleolar Retention of p30 Is Partially Transcription-dependent-We next tested whether the GFP-p30 fusion protein localized to the dense fibrillar components (DFC), which are sites of transient accumulation of both elongating and full-length primary transcripts released from the rDNA template, or to the granular component (GC), which is the site of pre-ribosome assembly. To this end, fibrillarin and p19ARF fused to GFP were used as specific markers for DFC and GC, respectively. Fig. 8 shows that p30 localized mainly within the GC compartment. Because the accumulation of several proteins in nucleoli is transcription-dependent, we determined whether nuclear import and accumulation of p30 in nucleoli are modified by treatment with AMD. The transcriptional inhibition assay is based on the observation that many shuttling proteins accumulate in the cytoplasm when transcription is inhibited (17). COS-7 cells were incubated for 3 h with either 0.05 or 5 μg/ml AMD to inhibit the activity of either RNA polymerase I alone or both RNA polymerases I and II, respectively. The effectiveness of AMD treatment was assessed by reduction in the size and shape of the nucleoli. The localization of GFP-p30 and nucleolin-GFP was not significantly affected at 0.05 μg/ml AMD, suggesting that p30 is not associated with rRNA. However, both proteins appeared to localize more in the nucleoplasm at the higher concentration of AMD (Fig. 9A), suggesting that their nucleolar retention is partly dependent on RNA polymerase II and III transcription. In the same experiment, human immunodeficiency virus Rev-GFP was used as a positive control and was efficiently relocated to the cytoplasm following AMD treatment at either concentration, as reported previously (18, 19).

FIGURE 4. FRAP and iFRAP analyses to determine the mobility and dissociation kinetics of the p30 protein compared with those of the p30ΔC250 and p30ΔN280 mutants. A, mobility of p30, p30ΔC250 (dC250), and p30ΔN280 (dN280) in nucleoli as determined by FRAP. Cells were imaged before and after photobleaching inside of the nucleolus. The recovery of the fluorescence signal was monitored by time-lapse microscopy. Shown are the results from quantification of the recovery kinetics relative to the initial pre-bleached fluorescence. Averages from at least 20 cells are shown. B, iFRAP of p30, p30ΔC250, and p30ΔN280. Cells were imaged before and after photobleaching of the entire nucleus with the exception of one nucleolus. The loss of fluorescence signal in the nucleolus was monitored by time-lapse microscopy. The p30ΔC250 and p30ΔN280 mutants exhibited similar dissociation kinetics from the nucleoli, which were significantly faster than those for wild-type p30. Averages from at least 20 cells are shown.

We next used live cell imaging to investigate GFP-p30 recovery kinetics in response to AMD treatment. FRAP results indicated a much slower recovery of fluorescence following AMD treatment (Fig. 9B), confirming that GFP-p30 is in fact retained in the nucleoli in a transcription-dependent manner via interaction with less mobile nucleolar components. This was supported by the calculated t1/2 for GFP-p30, which was ~9 s, compared with that for GFP-p30 in cells treated with actinomycin D, which was 77 s. To further demonstrate a role of p30-RNA interactions in the nucleolar retention of p30, we treated transfected cells with RNase A and AMD. Under these experimental conditions, significant amounts of GFP-p30 re-localized to the cytoplasm.
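The half-times quoted above (about 9 s for untreated cells versus 77 s after actinomycin D) imply reading off the time at which a normalized recovery curve reaches half of its plateau value. A minimal sketch of that estimate on a synthetic recovery trace, not on the study's data, is shown below.

```python
# Minimal sketch of estimating a FRAP recovery half-time (t1/2): the time at
# which the normalized recovery first reaches half of its plateau value.
# The recovery trace below is synthetic, not measured data from this study.

def half_time(times, recovery):
    plateau = recovery[-1]
    half = plateau / 2.0
    pairs = zip(zip(times, recovery), zip(times[1:], recovery[1:]))
    for (t0, r0), (t1, r1) in pairs:
        if r0 < half <= r1:
            # linear interpolation between the bracketing time points
            return t0 + (half - r0) * (t1 - t0) / (r1 - r0)
    return None

times = [0, 2, 4, 6, 8, 10, 15, 20, 30]      # seconds after bleaching
recovery = [0.0, 0.18, 0.33, 0.45, 0.55,
            0.62, 0.74, 0.80, 0.85]          # normalized fluorescence

print(f"estimated t1/2 = {half_time(times, recovery):.1f} s")
```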
We calculated the total fluorescence intensities of the nuclear and cytoplasmic regions in both cells on the presented image. The average total fluorescence intensities of the nuclear and cytoplasmic regions were almost similar, with 56% of the total GFP signal in the nuclear compartment and 44% in the cytoplasmic compartment (Fig. 9C), thereby demonstrating a significant role of p30-RNA associations in the nucleolar retention of p30. Because GFP-p30 was not completely redistributed to the nucleoplasm subsequent to AMD and RNase A treatment, we hypothesized that, in addition to RNA interactions, p30 retention may also be dependent on interactions with less mobile nucleolar resident proteins. p30 Interacts with the Large Ribosomal Subunit L18a-To identify potential p30-interacting partners, we immunoprecipitated p30-HA transcribed in vitro and analyzed bound proteins by mass spectrometry. Among the various proteins that immunoprecipitated with p30, our attention was drawn to the nucleolar L18a protein (Fig. 10A), a constituent of the 60 S ribosomal subunit (20). L18a was cloned in-frame with a myc epitope to allow detection. Two other constituents of the 60 S ribosomal subunit, L21a and L23a, were also cloned and used as controls. To confirm the presence of p30 and L18a in a protein complex, p30-HA was transfected into 293T cells and immunoprecipitated with anti-HA antibody, and co-immunoprecipitated proteins were analyzed using anti-Myc antibody. Under these experimental conditions, we confirmed that p30 directly or indirectly interacted with L18a. However, no complexes were detected for p30 and ribosomal proteins L21a and L23a (Fig. 10B) (data not shown), suggesting that protein complex formation between p30 and L18a is not simply due to overexpression of these proteins. Numerous viral proteins have been shown (12,(32)(33)(34) to associate with nucleolin, which is the major constituent of the nucleolus. When we transfected a nucleolin expression vector along with p30, we did not detect any interactions between the two proteins (data not shown), again suggesting that protein complexes formed between p30 and L18a are specific. HTLV-I is latent in most infected cells in vivo. This feature allows the virus to be amplified with each cellular replication of infected cells while avoiding immune recognition. The notion that HTLV-I is latent in adult T-cell leukemic cells is supported by numerous studies. The expression of viral genes cannot be detected in vivo, but the provirus is rapidly expressed once the cells are cultured ex vivo. Adult T-cell leukemia is characterized by the monoclonal expansion of infected cells, which cannot be explained by a continual de novo infection and immune clearance balance of infected cells. The same proviral integration site can be found several years apart in infected patients (21). It was reported recently that tenofovir administered 1 week after infection of NOD/SCID mice does not reduce the proviral loads, indicating that, after initial infection, clonal proliferation of infected cells is predominant over de novo infection of previously uninfected cells (22). HTLV-I has evolved multiple strategies to prevent apoptosis and to extend the life span of infected cells (23)(24)(25)(26)(27). In turn, this allows accumulation of genetic mutations and disease progression. Because HTLV-I is very immunogenic and has a low variability, reducing its expression is key in virus maintenance in vivo. 
We reported previously that p30 is able to potently suppress virus expression and thus may be involved in silencing in vivo (8). The mechanism by which p30 prevents HTLV-I expression is not fully resolved, but it may result from inhibition of Tax-mediated LTR activation (28) along with interaction and nuclear retention of the tax/rex mRNA (8). Therefore, mechanisms of p30 nuclear/nucleolar retention are essential to its suppressive function. In this study, we have identified two arginine-rich domains (located between amino acids 73 and 78 and amino acids 91 and 98) essential for HTLV-I p30 nucleolar localization/retention. In addition, our results revealed the existence of two previously unidentified NLS in the N-and C-terminal regions of the protein. These NLS were responsible for the nuclear localization of p30⌬C2 and p30⌬N5. The mRNA encoding p30 also encodes p13 in the same open reading frame, and the p13 amino acid sequence corresponds to the C terminus of p30. There is a discrepancy in the published literature concerning the mitochon- drial versus nuclear localization of p13 in transfected cells (29,30). The p30⌬N5 protein described in this study corresponds to the p13 protein with the mitochondrial targeting signal deleted. The localization of this mutant in the nuclear compartment reveals the presence of an NLS, which may be used under specific conditions when the mitochondrial targeting signal is masked. Multiple replacement of arginine with alanine in the NoRS allowed us to generate a nuclear p30 mutant (p30 -9RA) excluded from the nucleoli. We then tested whether post-transcriptional inhibition of HTLV-I expression is dependent on the nucleolar localization of p30. Our data demonstrate that post-transcriptional repression occurs in the nucleoplasm because no difference was found between wild-type p30 and the p30 -9RA mutant. The nucleolus is morphologically separated into three distinct components, which reflect the vectorial process of ribosome biogenesis. Fibrillar centers are surrounded by dense fibrillar components (DFC), and GC radiate out from the DFC (31). The observations that HTLV-I p30 is localized mainly to the GC (Fig. 8) and that p30 associates with viral RNA complexes prompted us to investigate whether p30 interaction with RNA and/or proteins may be involved in its nucleolar retention. Our results clearly indicate that p30 retention is associated with RNA polymerase II-and III-dependent transcription of nascent RNA inasmuch as treatment with AMD and RNase released a significant fraction of p30, which diffused to the cytoplasm. However, the existence of additional retention mechanisms was also suggested by the fact that, under these experimental conditions, only about half of GFP-p30 remained localized in the nucleus/nucleolus (56%). It is unclear however how RNase treatment may alter the nuclear export machinery. Many viral factors interact with nucleolar proteins. To identify p30 partners, we performed mass spectrometry. The results revealed the ribosomal protein L18a, a nucleolar protein, as a putative p30 partner. Transient transfection assays confirmed in vivo interactions between p30 and L18a, whereas, under the same experimental conditions, no binding was found between p30 and L21a, L23a, or nucleolin. The data suggest that p30 binding to L18a is specific. We cannot conclude whether these interactions are direct or indirect in absence of purified p30 protein. 
Our data are consistent with the fact that L18a is part of the pre-ribosomal subunit, which assembles in the GC compartment. Several studies have shown that the ribosomal protein L18a interacts with viral proteins and eukaryotic initiation factor-3 and facilitates internal re-initiation of translation in cytomegalovirus and hepatitis C virus (12, 32-34). Our results indicate that this phenomenon is conserved in HTLV-I. Although p30 is a non-shuttling protein strongly associated with the nucleoli, it is possible that a fraction of loosely bound protein may diffuse to the cytoplasm or may be redistributed during the cell cycle and breakdown of nucleoli, allowing additional cytoplasmic functions. Interestingly, the p30-encoding mRNA completely overlaps two other viral gene open reading frames, viz. p12 and p13. Whether p30 expression may directly or indirectly modulate internal initiation to increase expression of p12 and/or p13 warrants further study.
First report on the presence of aflatoxins in fig seed oil and the efficacy of adsorbents in reducing aflatoxin levels in aqueous and oily media

Abstract

Aflatoxin contamination of dried figs has been a chronic problem for decades but aflatoxin distribution within the fruit has not yet been revealed. In this study, we conducted aflatoxin analyses separately in the seedless part of dried figs and in fig seed oil. The results showed that both the seedless part and the seed oil were contaminated with aflatoxins at levels close to the regulatory limits set for products for direct consumption. The effectiveness of various adsorbents in removing aflatoxins from the aqueous and oily compartments of the fruit and the effects of these treatments on bioactive compounds and physicochemical characteristics were also investigated.

Introduction

Mycotoxins are metabolites of toxigenic mold species, and aflatoxins are the best-known group of mycotoxins due to their mutagenic, teratogenic and carcinogenic effects. The International Agency for Research on Cancer (IARC) classified aflatoxins as "Group 1, carcinogenic to humans" based on experimental evidence (IARC 1993). Various types of food and feed have been reported to be contaminated with aflatoxins. Among these, dried figs draw attention with both a high incidence of contamination and high contamination levels (RASFF 2018, 2019). In fig processing plants, dried figs coming from the orchards are examined under ultraviolet light and figs showing bright greenish-yellow fluorescence are removed. Fluorescent figs are regarded as "contaminated fruits" due to the relationship between the fluorescence and aflatoxin contamination (Steiner et al. 1988). It was shown that aflatoxin content is effectively lowered after removing fluorescent figs from the batch. Aflatoxin-contaminated figs not only raise health concerns among consumers but also cause economic problems and environmental hazards. It would be very useful if the aflatoxins could be removed from the product without forming any toxic compounds originating from the toxins themselves. Then, the decontaminated figs, or at least some parts of the fruits, could be used by the industry and various problems stemming from the contaminated products would be solved. Of course, prevention of mold contamination and toxin production is the best solution to the aflatoxin problem (Luo et al. 2018). However, it is not that easy in practice due to moderate temperature, high humidity, sudden rains, etc., and developing decontamination strategies is inevitable. Among these, adsorption is regarded as superior to other processes such as heat treatment and oxidation since it does not change the chemical structure of the toxin and does not cause the formation of toxic by- and degradation products (Olopade et al. 2019). Adsorbents are currently used in a wide range of applications including pollution control, purification, separation, and others across a large number of industrial sectors (Jenkins 2015). Moreover, the effectiveness of adsorbents in reducing various mycotoxins in different food products has been examined in scientific studies (Var et al. 2008, Liu et al. 2021, Muaz and Riaz 2021) and successful results have been obtained. On the other hand, the loss of some bioactive components contributing to the nutritional value of the product was reported as the main disadvantage of the adsorption process (Gokmen et al. 2001). A number of authors investigated different techniques to decontaminate dried figs (Altug et al.
1990, Zorlugenc et al. 2008, Karaca and Nas 2009); however, to our knowledge, there is no information regarding the employment of adsorption for aflatoxin removal from figs. Recently, a new product, "fig seed oil", mainly produced from low-quality figs, has become very popular. It can be consumed as a food additive or a phytochemical product or used for pharmaceutical and cosmetic purposes (Icyer et al. 2017). Several studies concerning the composition and bioactive components of fig seed oil have been conducted in the last few years (Duman and Yazici 2018, Guven et al. 2019, Hssaini et al. 2020). However, although fig fruit is quite susceptible to mold contamination and toxin production, aflatoxin contamination in fig seed oil has not yet been examined. Therefore, the lack of scientific information on the contamination level of fig seed oil and on the effectiveness of widely used adsorbents in reducing aflatoxin levels in the seedless part and seed oil of dried figs, as well as on the impact of this treatment on important characteristics, were our justifications for conducting the present research. The aim of this study was to determine the aflatoxin contamination levels separately in the seedless part of dried figs and in fig seed oil.

Materials

The study was carried out on dried figs of the Sarilop cultivar grown in the Aegean Region (Aydın province) of Turkey. Dried fig samples showing bright greenish-yellow fluorescence when examined under UV light were used in the experiments. These figs were discarded from the lot in fig processing plants, collected by the Aegean Exporters' Association in November 2018 and kindly supplied to us. Activated carbon (Akticol FA), polyvinylpyrrolidone (PVP, granular) and bentonite (Aktivit) were obtained from Erbsloeh (Geisenheim, Germany). Marl (containing approximately 90% calcium carbonate) and gelatin (gel strength 300, Type A) were purchased from Odemis Pazari (Izmir, Turkey) and Sigma-Aldrich (Steinheim, Germany), respectively. A mixed standard solution of aflatoxins (containing 1.00, 0.29, 0.99 and 0.27 mg of aflatoxins B1, B2, G1 and G2, respectively, in 1 ml of methanol) was obtained from Supelco (Bellefonte, PA, USA). Other chemicals were at least reagent grade and purchased from Merck (Darmstadt, Germany).

was transferred to a Waring blender jar and 150 ml of distilled water was added. After homogenization for 1 min, the content of the jar was poured onto a coarse filter paper placed on top of a funnel. Fig seeds on the filter paper were taken with tweezers and transferred to a petri dish. Seeds were washed under running tap water to remove any flesh residues and transferred onto a clean coarse filter paper to dry at room temperature for 24 h. Dried fig seeds were used for oil extraction while the seedless part of the fruit was directly subjected to the aflatoxin analysis.

Oil extraction from fig seeds

About 25 g of fig seeds and 100 ml of n-hexane were homogenized in a plastic beaker at 10 000 rpm for 1 min using a homogenizer (IKA T18 Ultra-Turrax, Staufen, Germany). The beaker content was shaken at 120 rpm for 2 h on an orbital shaker (OS-20, Boeco, Hamburg, Germany) and then filtered through a Whatman filter paper. Seed residues on the filter paper were re-extracted with another portion of n-hexane following the steps explained above. The filtrates were combined and dried over anhydrous sodium carbonate.
The hexane was evaporated using a rotary evaporator (Scilogex RE100-Pro, Rocky Hill, CT, USA) and the residue (fig seed oil) in the rotary bottle was transferred to an amber vial for further analyses.

Preparation of aqueous fig extracts

Preliminary experiments were conducted to determine the dilution ratio between dried figs and water to be used in the preparation of the extracts. Accordingly, 100 g of dried figs and 175 g of distilled water were blended in the Waring blender. This ratio resulted in a Brix value of 22 ± 2 in the extract, which is very close to that of fresh fig fruit (Aytekin Polat and Caliskan 2008). The homogenate was filtered through a filter paper and this filtrate was used as the aqueous extract of dried figs.

Aflatoxin analysis

Aflatoxin analysis consisted of an extraction procedure and a quantification step performed by high performance liquid chromatography (HPLC). Aflatoxins were extracted from the fruit samples according to the method of Stroka et al. (2000) and a procedure suggested by Bao et al. (2013) was used for oil samples. Both methods comprise a homogenization step (with a methanol-water mixture) and an immunoaffinity cleanup step. The extracts were injected into an HPLC apparatus (Shimadzu LC-10AD, Kyoto, Japan) equipped with a fluorescence detector (Shimadzu RF-20A, Kyoto, Japan). The separation of the analytes was performed on an ODS 2 Hypersil (3 µm, 150 × 4.6 mm I.D., Thermo Scientific, Waltham, MA, USA) analytical column maintained at 25 °C. The excitation and emission wavelengths of fluorescence detection were 362 and 455 nm, respectively. A post-column derivatization was performed using an electrochemical derivatization apparatus (Coring System Diagnostix GmbH, Gernsheim, Germany). The mobile phase was a mixture of methanol and water (40:60, v/v) containing 216 mg potassium bromide and 636 µl of 4 M nitric acid, at a flow rate of 1 ml/min. A series of standard solutions (with concentrations of 5-250, 1.5-72.5, 5-247.5 and 1.4-67.5 µg/L aflatoxins B1, B2, G1 and G2, respectively) were prepared from the standard solution of aflatoxins by appropriate dilutions in methanol. These solutions were used to prepare calibration curves and aflatoxins in the samples were quantified using these curves. Calibration curves with low aflatoxin concentrations (0.05-0.5, 0.03-0.3, 0.03-0.3 and 0.02-0.2 µg/L aflatoxins B1, B2, G1 and G2, respectively) were also drawn to determine limit of detection (LOD) and limit of quantification (LOQ) values. LOD and LOQ values were calculated from these curves by multiplying the standard deviation of the response by 3.3 and 10, respectively, and dividing by the slope of the calibration curve. LOD values were determined as 0.03, 0.02, 0.12 and 0.05 µg/L and LOQ values were determined as 0.10, 0.05, 0.47 and 0.16 µg/L for aflatoxins B1, B2, G1 and G2, respectively. Recovery tests were conducted by spiking the samples with aflatoxin standard solutions to obtain a known final concentration (1 ng/g) of aflatoxin B1. After letting the sample stand for 15 min at room temperature, the extraction and the injection procedures were carried out as explained above. Triplicate injections were conducted and the mean recovery values were calculated as 87, 60, 120 and 85% for aflatoxins B1, B2, G1 and G2, respectively.
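The LOD/LOQ rule quoted above (3.3 and 10 times the standard deviation of the response divided by the slope of the low-level calibration curve) can be written out explicitly. In the sketch below the concentration and peak-area pairs are invented for illustration; only the arithmetic follows the procedure described in the text.

```python
# Illustrative sketch of the LOD/LOQ rule described above:
#   LOD = 3.3 * SD(residuals) / slope,  LOQ = 10 * SD(residuals) / slope,
# with the slope and residual standard deviation taken from the low-level
# calibration curve. The concentration/peak-area pairs are invented.

import statistics

def linear_fit(concs, responses):
    mean_x = statistics.fmean(concs)
    mean_y = statistics.fmean(responses)
    sxx = sum((x - mean_x) ** 2 for x in concs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(concs, responses))
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    residuals = [y - (slope * x + intercept) for x, y in zip(concs, responses)]
    return slope, intercept, statistics.stdev(residuals)

concs = [0.05, 0.1, 0.2, 0.3, 0.4, 0.5]        # aflatoxin B1, ug/L (hypothetical)
areas = [410, 830, 1640, 2470, 3280, 4090]     # detector peak areas (hypothetical)

slope, intercept, sd = linear_fit(concs, areas)
print(f"LOD = {3.3 * sd / slope:.3f} ug/L, LOQ = {10 * sd / slope:.3f} ug/L")
```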
Adsorbent treatments

Contaminated fig extracts and fig seed oil samples were treated with activated carbon, bentonite, marl, polyvinylpyrrolidone and gelatin in order to determine the effectiveness of these adsorbents in reducing the levels of aflatoxins. Before treatments, the aflatoxin B1 and total aflatoxin levels of the contaminated dried fig extracts were 71.46 ± 6.09 and 143.66 ± 10.40 µg/kg, respectively. These values were 11.15 ± 0.56 and 43.55 ± 1.84 µg/L, respectively, for the contaminated fig seed oil samples. A portion of contaminated sample (40 g of the extract or 7.5 g of the oil) was put into a polypropylene centrifuge tube and the required amount of one of the adsorbents, corresponding to 0.1-5.0% of the sample, was added. The tube was shaken at 200 rpm for 2 h on the orbital shaker and then centrifuged at +4 °C for 10 min at 7000 rpm (NF 800 R, Nuve, Ankara, Turkey). The supernatant was filtered through a microfilter (0.45 µm) and then injected into the HPLC. The adsorbent doses and treatment time chosen in the study were based on data in the literature (Bueno et al. 2005, Carraro et al. 2014, Carrasco-Sanchez et al. 2017). During treatments, a gelling problem was observed in the case of direct addition of gelatin. For this reason, a 12.5% solution of gelatin was prepared in water at around 50 °C. The required amount of this solution, giving a gelatin concentration of 0.1-1.0% in the final solution, was added.

Co., Ltd., Shanghai, China) and a digital pH-meter (Hanna Instruments, 2211 pH/ORP meter, Woonsocket, RI, USA) were used for Brix and pH measurements, respectively. Titratable acidity was determined by titrating the sample with sodium hydroxide solution (0.1 mol/l) using phenolphthalein as an indicator. Effects of the adsorbents on the antioxidant activities and total phenolic contents of dried fig extracts and fig seed oils were determined. Twenty-five ml of a methanol:water mixture (80:20, v/v) and a certain amount of sample (2.625 ml of extract or 2.5 g of oil) were transferred to a polypropylene tube. The tube was shaken at 200 rpm on the orbital shaker for 1 h and centrifuged at 7000 rpm for 10 min at 4 °C. Then, the supernatant was filtered through a Whatman filter paper into an amber vial and this filtrate was used for both antioxidant activity and total phenolic content determination. Antioxidant activity of the samples was determined using the method described by Brand-Williams et al. (1995). Two and a half mg of 2,2-diphenyl-1-picrylhydrazyl (DPPH) were dissolved in 100 ml of methanol and the absorbance of this solution at 515 nm was approximately 0.800. The required amount of this solution was mixed with the sample (300 µl extract or 150 µl oil) to get a final volume of 3 ml. This mixture was vortexed for 2 min and held for 1 h in the dark. The absorbance of the mixture was measured at 515 nm using a spectrophotometer (UV-1201, Shimadzu, Kyoto, Japan) and the decrease in absorbance was calculated relative to the initial absorbance of the DPPH solution. Trolox was used as standard and the antioxidant activity was expressed as mmol trolox equivalent per g extract or ml oil. Stock (0.5%, w/v) and standard solutions (0-25 mM) of trolox were prepared in methanol. Standard solutions were treated with DPPH solution as described above, a calibration curve was constructed and the antioxidant activities of the samples were determined using this curve. Total phenolic content of the samples was determined according to the method of Singleton et al. (1999).
Two hundred microliters of the sample (extract or oil) were put into a test tube and 1500 µl of distilled water and 100 µl of Folin-Ciocalteu reagent were added. The tube content was vortexed for 2 min and held for 3 min at ambient temperature. After adding 1200 µl of sodium carbonate solution (7.5%, w/v) and holding the tubes for 2 h in the dark, the absorbance was measured at 765 nm. Gallic acid was used as standard and total phenolic content was expressed as g gallic acid equivalent (GAE) per kg extract or oil. Stock (0.05%, w/v) and standard solutions (0-100 mg/L) of gallic acid were prepared in distilled water. Standard solutions were treated with Folin-Ciocalteu and sodium carbonate solutions as described above, a calibration curve was constructed and the phenolic contents of the samples were determined using this curve. The absorbance spectra of the untreated oil sample were taken over the range of 360-800 nm using the spectrophotometer and the absorption maxima were determined as 459.5 and 435.5 nm. The absorbance values of all treated oil samples were measured at these wavelengths and thus the effect of adsorbent treatments on oil color was determined. Tocopherols were determined according to the AOCS Official Method Ce 8-89 (AOCS 2009) with a slight modification. Briefly, 0.25 g of fig seed oil sample was diluted with 1 ml of 2-propanol, filtered through a microfilter with a pore size of 0.45 µm (Chromafil Xtra PTFE-45/25, Macherey-Nagel, Duren, Germany) and injected into the HPLC. The analytical column was a Zorbax Eclipse XDB-C18 column (Agilent Technologies, 250 mm × 4.6 mm I.D., 5 µm particle diameter, Santa Clara, USA), the column temperature was 25 °C and the injection volume was 20 µl. The mobile phase was HPLC grade methanol with a flow rate of 1 ml/min. Detection was carried out at 289 nm for alpha-tocopherol and 297 nm for delta-tocopherol and gamma-tocopherol using a photodiode array detector (Shimadzu SPD-M20A, Kyoto, Japan). Standard solutions of alpha-, delta- and gamma-tocopherols were separately prepared in ethanol using analytical standards of these compounds (Supelco, Sigma-Aldrich, Bellefonte, CA, USA) and 6-point calibration curves (0.25-25 mg/L for alpha-tocopherol, 1-50 mg/L for delta-tocopherol and 5-500 mg/L for gamma-tocopherol) were drawn. Quantification of the tocopherols in the samples was performed using these curves. Fatty acid methyl esters (FAMEs) were prepared according to the AOCS Official Method Ce 2-66 (AOCS 1997). Accordingly, 0.2 g of fig seed oil sample was dissolved in 2 ml of n-hexane and treated with 0.2 ml of methanolic potassium hydroxide solution. After mixing vigorously and waiting for phase separation (30 min), the clear upper layer was taken with a micro-syringe and injected into the injection port of the gas chromatography apparatus (Agilent 7820A, Santa Clara, USA) equipped with a flame ionization detector. FAMEs were separated on a capillary column (Agilent Technologies, DB-FATWAX UI, 30 m × 0.25 mm i.d., 0.25 µm film thickness, Santa Clara, USA). The injection volume was 1 µl with a split ratio of 1:100 and the carrier gas was hydrogen at a flow rate of 1.4 ml/min. The column temperature was programmed at 50 °C for 2 min, increased to 174 °C at 50 °C/min and held for 14 min, and then increased to 215 °C at 2 °C/min and held for 25 min. The temperatures of the injector and the detector were 250 and 280 °C, respectively.
Peaks in the chromatogram were identified by comparison of their retention times with those of standard methyl esters (Supelco 37-component FAME Mix, Bellefonte, PA, USA).

Total aflatoxin concentrations in the fig lots I, II and III were 320.1, 105.4 and 279.6 µg/kg, respectively (Table 1). The amounts of aflatoxins determined in the seedless part were higher than those determined in the seed oil. It was determined that 2.5%, 6.3% and 1.0% of the aflatoxin B1 detected in lots I, II and III, respectively, was found in the seed oil. These values were 3.7%, 10.0% and 2.5% for total aflatoxins. The results showed that fig seed oil as well as the seedless part of the fruit can be contaminated with aflatoxins. The oils of some other crops such as corn, peanut, coconut, sunflower seed, olive, sesame and palm were also reported to be contaminated with aflatoxins in the past (Samarajeewa and Arseculeratne 1983, Ghitakou et al. 2006, Banu and Muthuray 2010, Elzupir et al. 2010, Shephard 2018). It is known that fungal spores can transfer to the fig fruit by various means (wind, insects, the fig wasp, etc.) and enter the fruit through its ostiole (Doster et al. 1996). This means that fungal colonization and aflatoxin formation mainly occur in the internal cavity of the fruit, in which the seeds are located. The oil inside the seed might be contaminated with aflatoxins due to any physical damage (falling to the ground, bird or insect attack, etc.) that can occur in the seeds before and/or during drying of the fruit. A similar situation can occur when the seeds crack during homogenization of the fruit in seed oil production. It should also be noted that factors such as the polarities of the toxins and their solubilities in oil are very critical for the transfer of the aflatoxins to the oil of the seed (Table 1).

Effects of adsorbent treatments on removal of aflatoxins from aqueous extracts of dried figs

The effects of different adsorbent treatments on the levels of aflatoxin B1 and total aflatoxins in aqueous extracts of dried figs are shown in Figure 1. The treatments resulted in 61.53-97.31% and 55.11-95.77% reductions in the levels of aflatoxin B1 and total aflatoxins, respectively. Activated carbon was the most effective agent in reducing aflatoxin levels in dried fig extracts. Over 95% reductions in the levels of aflatoxin B1 and total aflatoxins were achieved with the use of this adsorbent at all tested concentrations and no significant differences were found among the concentrations (p > 0.05). Likewise, after activated carbon treatments, about 99% reductions were recorded for aflatoxins (Diaz et al. 2003) and also for other mycotoxins such as zearalenone (Bueno et al. 2005) and ochratoxin A (Fernandes et al. 2019). No doubt the effectiveness of activated carbon in binding mycotoxins is due to its extremely large surface area and highly porous structure (Galvano et al. 2001). Lemke et al. (2001) claimed that activated carbon adsorbs many compounds including aflatoxins primarily by hydrogen bonding. Bentonite is another effective agent in reducing aflatoxin levels in aqueous dried fig extracts (Figure 1). Treating with this agent resulted in at least 89% reductions in the levels of both aflatoxin B1 and total aflatoxins in aqueous dried fig extracts. Aflatoxin reduction rates obtained with bentonite and activated carbon treatments were not significantly different in most cases (p > 0.05). Similarly, Diaz et al. (2003) and Fernandes et al.
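The percentages above (for example, 2.5% of lot I's aflatoxin B1 residing in the seed oil) follow from comparing the amount of toxin carried by each fraction, i.e. its concentration multiplied by the fraction's mass. The sketch below shows that bookkeeping with hypothetical masses and concentrations; it is not a reconstruction of the study's actual mass balance.

```python
# Hedged sketch of how the share of total aflatoxin carried by the seed oil can
# be derived: the amount in each fraction is its concentration multiplied by the
# fraction's mass, and the oil share is its amount divided by the total amount.
# The masses and concentrations below are hypothetical, not the study's data.

def oil_share(conc_seedless, mass_seedless, conc_oil, mass_oil):
    amount_seedless = conc_seedless * mass_seedless   # ug = (ug/kg) * kg
    amount_oil = conc_oil * mass_oil
    return 100.0 * amount_oil / (amount_seedless + amount_oil)

share = oil_share(conc_seedless=300.0, mass_seedless=0.95,   # ug/kg and kg
                  conc_oil=400.0, mass_oil=0.02)
print(f"share of total aflatoxin found in the seed oil: {share:.1f}%")
```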
(2019) indicated that both bentonite and activated carbon were quite effective in aflatoxin removal from model systems, yielding similar reduction rates. Bentonite, a naturally occurring clay, has a layered structure which mainly consists of montmorillonite (Senturk et al. 2009). It was reported that this clay was effective in reducing aflatoxin M1 levels in milk (Carraro et al. 2014) and aflatoxin B1 levels in model systems (Tabari et al. 2018). Wang et al. (2020) claimed that bentonite has active sites within its interlamellar region for aflatoxin sorption. According to the authors, the carbonyl moiety in the aflatoxin molecule is important for binding. On the other hand, Phillips et al. (2019) claimed that other mechanisms such as electron donor-acceptor complexation, ion-dipole interaction, and coordination between exchange cations and the carbonyl oxygens can also lead to aflatoxin binding. It was shown that the adsorption strength of bentonite strongly depends on its mineral content (Galvano et al. 2001) and the pH of the medium (Bueno et al. 2005). According to our results, aflatoxins in fig extracts could also be effectively reduced by marl treatment (Figure 1). Reductions of 76.23-93.79% and 66.94-89.60% in the levels of aflatoxin B1 and total aflatoxins, respectively, were recorded after treating the extracts with this agent. Reduction rates increased with increasing concentration of the agent (p < 0.05). At the treatment concentration of 5.0%, the aflatoxin reduction rates obtained with marl were not significantly different from those obtained with activated carbon and bentonite (p > 0.05). Marl, a calcareous clay containing a high amount of calcium carbonate, is conventionally used for deacidification of grape must (Karababa and Develi Isikli 2005). It lowers the acidity by precipitating tartaric acid and malic acid as calcium tartrate and calcium malate, respectively. This causes an increase in the pH of the must from around 3.5 to 5.0-6.0 (Rezaei et al. 2020). In the past, reductions in aflatoxin levels were observed during processing of the must, especially after treating with marl (Bahar and Altug 2009, Heshmati et al. 2019). However, it was not clarified whether the reduction was the result of an adsorption effect or of a structural change of the toxin caused by alkalization of the medium. PVP was the only synthetic compound tested in this study. PVP and its cross-linked form polyvinylpolypyrrolidone (PVPP) are efficient adsorbents which are widely used in the fruit juice industry as fining agents. As can be seen from Figure 1, PVP treatments of fig extracts resulted in 67.36-77.23% and 56.92-66.89% reductions in the levels of aflatoxin B1 and total aflatoxins, respectively. Previously, it was reported that PVPP and PVP were effective in reducing fumonisin levels in red wines (Carrasco-Sanchez et al. 2017) and zearalenone levels in model systems (Alegakis 1999), respectively. Moreover, these compounds were regarded as "promising" due to their high performance in reducing aflatoxin toxicity when used as feed additives (Celik et al. 2000, Stroud 2007). Our results showed that gelatin treatment had the most limited effect on reducing aflatoxins in fig extracts (Figure 1). This treatment resulted in 61.53-66.78% and 55.11-58.07% reductions in the levels of aflatoxin B1 and total aflatoxins, respectively. Close results (57.6-78.0% reductions) were reported by Heshmati et al. (2019), who treated grape musts with gelatin. Lasram et al.
(2008) reported that ochratoxin A levels were reduced by approximately 58% after treating red wines with a gelatin solution of 0.1 ml/L. Gelatin is a water-soluble and gel-forming hydrocolloid. It was assumed that mycotoxins could be trapped and/or adsorbed within the gel network formed by the gelatin (Heshmati et al. 2019).

Effects of adsorbent treatments on physicochemical characteristics of aqueous extracts of dried figs

The effects of various adsorbent treatments on the physicochemical characteristics of aqueous extracts of dried figs are given in Table S1. The Brix values of dried fig extracts were 21.7 ± 0.5 at the beginning and varied between 21.1 ± 0.0 and 22.3 ± 0.3 after adsorbent treatments. Statistical analysis showed that adsorbent treatments did not have any significant effect on the Brix values of the extracts (p < 0.05). This shows that the adsorbents we tested did not adsorb sugars, which constitute a substantial proportion of the total soluble material in the extract. There are contradictory results in the literature about the effect of adsorbents on the Brix values of various juice samples. For instance, the Brix values decreased after treating apple juice with activated carbon (Gokmen et al. 2001, Coklar and Akbulut 2010) and increased after treating sour cherry juice with calcium and potassium carbonates (Yesiloren and Eksi 2015). Gulcu (2008) reported that the Brix values of grape juice decreased after bentonite treatment and did not change after gelatin treatment. We observed that pH and titratable acidity values generally decreased after adsorbent treatments. These values were most affected by marl treatment. A drastic increase in pH (from 4.39 to 5.97) and a sharp decrease in titratable acidity (from 1.39 to 0.38 g/100 g) were observed after marl treatment of the extracts. The initial antioxidant activity and total phenolic content of dried fig extracts were 3.06 ± 0.07 g trolox/kg and 1.89 ± 0.36 g GAE/kg, respectively (Table S1). Activated carbon and bentonite treatments at concentrations of 1.0% and 5.0% significantly reduced the antioxidant activity and total phenolic content of the extracts. Likewise, significant decreases were observed in the total phenolic contents of activated carbon-treated apple juice (Gokmen et al. 2001) and bentonite-treated grape juice (Gulcu 2008). Coklar and Akbulut (2010) recorded 20-75% reductions in the total phenolic contents of apple juice after treating with activated carbon at concentrations of 0.5-3.0 g/L. In the present study, gelatin treatments significantly reduced the antioxidant activity of dried fig extracts. Similar results were also observed in grape juice (Gulcu 2008) and pomegranate juice (Erkan-Koc et al. 2015) after gelatin treatments.

Figure 1. The effects of treating aqueous extracts of dried figs with various adsorbents on (A) aflatoxin B1 levels and (B) total aflatoxin levels. All the error bars were calculated based on the standard deviation of three measurements. Different lower-case letters indicate significant differences among the concentrations of a specific adsorbent (p < 0.05). Different upper-case letters indicate significant differences among the adsorbents at a specific concentration (p < 0.05). The gelatin doses tested were 0.1, 0.5 and 1.0% due to the gelling problem at higher doses.

Effects of adsorbent treatments on removal of aflatoxins from fig seed oil

The effects of different adsorbent treatments on the levels of aflatoxin B1 and total aflatoxins in fig seed oil are shown in Figure 2.
At 0.1% concentration, PVP was the most effective adsorbent in reducing aflatoxin levels in fig seed oil (Figure 2). Treating contaminated oil samples with PVP at 0.1% resulted in 15.0% and 26.5% reductions in the levels of aflatoxin B1 and total aflatoxins, respectively. On the other hand, when used at 5.0% concentration, activated carbon and marl were the most effective adsorbents in reducing aflatoxin levels in fig seed oil. These adsorbents resulted in 88% and 86% reductions in the levels of aflatoxin B1 and total aflatoxins, respectively. Bentonite was another effective adsorbent, with reduction rates of 77.1% and 76.7% in the levels of aflatoxin B1 and total aflatoxins, respectively. PVP and gelatin had a relatively limited effect in reducing aflatoxin levels in fig seed oil. PVP and gelatin treatments caused 26.5% and 14.4% reductions, respectively, in aflatoxin B1 levels in fig seed oil, while the total aflatoxin reductions observed after these treatments were below 20%. Chromatograms (overlapped) showing aflatoxin peaks in aqueous extracts of dried figs before and after treatment with various adsorbents at a concentration of 5% can be seen in Figure 3.

Figure 2. All the error bars were calculated based on the standard deviation of three measurements. Different lower-case letters indicate significant differences among the concentrations of a specific adsorbent (p < 0.05). Different upper-case letters indicate significant differences among the adsorbents at a specific concentration (p < 0.05). The gelatin doses tested were 0.1, 0.5 and 1.0% due to the gelling problem at higher doses.

It is known that coconut, olive, palm, peanut and maize are among the products that are risky for aflatoxin contamination. They all have a high oil content and can be used as raw materials for the edible oil industry. In this case, it is likely that the oils of these products can also be contaminated with aflatoxins. As a matter of fact, it was reported that coconut oil (Samarajeewa and Arseculeratne 1983), olive oil (Ghitakou et al. 2006), sunflower oil, sesame oil, peanut oil (Elzupir et al. 2010), canola oil (Nabizadeh et al. 2018) and some other seed oils (Bhat and Reddy 2017) can be contaminated with high levels (above the legal limits) of aflatoxins. Kamimura et al. (1986) reported that the levels of aflatoxins can be reduced during edible oil production, especially due to the oil-refining process. The adsorbents used for the removal of undesired substances (trace elements, gossypol, pesticides, wax, gums, etc.) in the bleaching step during refining are likely also effective on aflatoxins and other mycotoxins. Although it has been known that many types of oilseeds can be contaminated with mycotoxins and that these toxins can transfer to the oil of the seed, the number of studies focusing on the removal of mycotoxins by adsorbent treatments is very limited. However, in recent studies, newly functionalized adsorbents have been tested for the removal of zearalenone from corn oil (Bai et al. 2018) and aflatoxin B1 from rice bran oil (Ji and Xie 2020). Ma et al. (2017) conducted a study with contaminated peanut oil samples and determined that montmorillonite was quite effective in reducing the aflatoxin B1 level. Compatible with this result, we observed that bentonite, of which the major clay mineral is montmorillonite, could reduce the aflatoxin B1 level in fig seed oil by more than 75%.
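Reduction rates of the kind quoted throughout this section are simply the relative decrease in toxin concentration between untreated and treated samples. As a purely illustrative calculation with hypothetical values (not measurements from this study), an extract containing 10 mg/kg aflatoxin B1 before treatment and 0.4 mg/kg after treatment corresponds to a reduction of 100 × (10 − 0.4)/10 = 96%.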
It is obvious that the effectiveness of the adsorbents in aflatoxin removal from aqueous and oily media can be quite different. This can be easily seen from the results of the present study. For instance, activated carbon at 0.1% was quite effective in reducing aflatoxins, causing 96.85% and 94.93% reductions in the levels of aflatoxin B1 and total aflatoxins, respectively, in aqueous extracts of dried figs, whereas these rates were only 7.11% and 9.67%, respectively, in fig seed oil. Even when this agent was used at 5.0%, the aflatoxin reduction rates obtained in fig seed oil were below 90%, which was lower than the rates obtained in the aqueous extracts. Activated carbon also appeared to adsorb compounds such as chlorophylls and carotenoids that contribute to the color of the fig seed oil. Similarly, decreases in the color values of fish oil (Monte et al. 2015) and soybean oil (Udomkun et al. 2018) were observed after activated carbon treatments. Before the treatments, the antioxidant activity and total phenolic content of fig seed oil were 103.31 ± 2.97 g trolox/kg and 0.51 ± 0.09 g GAE/kg, respectively (Table S2). It was observed that the antioxidant activity of fig seed oil generally decreased after adsorbent treatments, and these decreases were more pronounced at higher adsorbent concentrations. The tocopherol contents of fig seed oil after the adsorbent treatments are given in Table S3. The initial contents of gamma-, delta- and alpha-tocopherols in fig seed oil were 4060.6 ± 156.6, 155.1 ± 4.8 and 40.4 ± 1.5 mg/L, respectively. The contents of the tocopherols did not change significantly after marl, PVP and gelatin treatments (p > 0.05). However, activated carbon and bentonite treatments at 5.0% resulted in significant reductions in the contents of all tocopherol isomers tested (p < 0.05). Reductions in the contents of gamma-, delta- and alpha-tocopherols were 61.8%, 27.7% and 44.1% after activated carbon treatment and 20.8%, 21.3% and 18.5% after bentonite treatment, respectively. Shi et al. (2017) reported that the gamma-tocopherol content of sesame oil decreased after activated carbon treatment. Similar decreases were also observed in the tocopherol contents of rice bran oil after treatments with magnetic graphene composite adsorbents (Ji and Xie 2020). The effects of various adsorbent treatments on the fatty acid composition of fig seed oil are given in Table S4. Our results showed that adsorbent treatments significantly affected the fatty acids with 18 carbon atoms. For instance, increases in polyunsaturated fatty acids (linoleic acid [18:2] and linolenic acid [18:3]) and decreases in monounsaturated (oleic acid [18:1]) and saturated (stearic acid [18:0]) fatty acids were observed. This finding shows that the adsorbents tested in the present study could adsorb saturated and monounsaturated fatty acids rather than polyunsaturated fatty acids. This might be due to the higher fluidity, and thus better mobility, of the latter compared to the former fatty acids. Remarkable changes in the fatty acid composition of various oil samples after different adsorbent treatments were also observed in previous studies (Shi et al. 2017, Ji and Xie 2020).

Conclusions

The aflatoxin contamination of the dried fig samples used in the present study is considerably higher than the international regulatory limits set for products intended for human and even animal consumption. There is a need for safe, practical, reliable and economic techniques to effectively reduce aflatoxins and lower their concentrations below regulatory limits.
Among other treatments, adsorption seems to be a promising technique for aflatoxin removal, as it does not cause any change in the toxin structure and does not form any toxic by-products or degradation products. To the best of our knowledge, this is the first report demonstrating the presence of aflatoxin contamination in fig seed oil. Although the aflatoxin amounts determined in the oil are lower than those determined in the seedless part of the fruit, they can still pose a serious health hazard to consumers. It was observed that the effectiveness of the adsorbents in reducing aflatoxin levels could vary substantially between aqueous and oily media. Activated carbon, bentonite and marl were the most effective adsorbents, resulting in about 90% reductions in the aflatoxin levels. However, significant changes in physicochemical characteristics and antioxidant potentials were observed after the adsorbent treatments. Optimization studies are required to determine the right concentrations of the adsorbents to obtain maximum aflatoxin reduction and minimum loss in the quality of the product.
Precision, Priority, and Proxies in Mathematical Modelling

In recent years, scholars have moved away from "modelling as a vehicle" to learn mathematics approaches and have instead emphasized the value of modelling as content in its own right. This shift has raised tensions in how to reconcile authentic mathematical modelling with curricular aims. The aim of the research study reported in this chapter is to explore one aspect of this tension: the divergence of student thinking from the task-writer's intentions. Analysis of task-based cognitive interviews led to two interrelated findings: participants' choices did not lead to intended solutions (nor to curricular objectives), and participants' choices were guided by their giving priority to variables and assumptions that aligned with their desire to reflect the precision and complexity of their lived experiences of the task situations being modelled. Two common interpretations of such findings are to fault the participants as incapable of applying their knowledge to solve the problems or to fault the tasks as being inauthentic. I use the actor-oriented theory of transfer to reconcile these opposing views.

Introduction

Historically, scholars have understood there to be "two fundamentally different purposes when teaching mathematical modelling" (Stillman et al. 2016, p. 283) in the classroom (Julie and Mudaly 2007; Niss et al. 2007). One is to use "modelling as a vehicle for facilitation and support of students' learning of mathematics as a subject" (Niss et al. 2007, p. 5). The other is to learn mathematics "so as to develop competency in applying mathematics and building mathematical models" (Niss et al. 2007, p. 5). These authors stressed that these approaches are not a dichotomy, meaning neither tasks nor facilitators' intentions in using the tasks must be classified as one or the other. Though the role of mathematical modelling in achieving curricular aims has both amplified in recent years (e.g. National Governors Association Center for Best Practices and Council of Chief State School Officers 2010; Niss and Hojgaard 2011; OECD 2017) and undergone attempts at standardization (e.g., Bliss et al. 2016), in many classrooms the emphasis remains on the teaching of modelling as a vehicle for teaching mathematical concepts and processes. A wide variety of tasks are used to further curricular aims, ranging from word problems, to application problems, to original projects (Blum and Niss 1991). Using such tasks to plan and sequence learning trajectories for students means the tasks must have intended solutions: a predetermined strategy, heuristic, process, or outcome that aligns with mathematical learning objectives. In this chapter, I focus on tasks which were designed to further specific curricular aims and therefore have intended solutions from the task setter's perspective. However, one challenge in using modelling in classrooms is that the solution of a modelling task is not inherent to the task itself (Czocher 2015; Murata and Kattubadi 2012; Schwarzkopf 2007). For example, Manouchehri and Lewis (2017) reported on 1000 middle school students' solutions to the word problem Which is the best job option, one that pays $7.50/hour or one that pays $300/week? The task is used to address the topic of linear equations. The intended solution is to formulate two linear equations, y = 7.5x and y = 300, and seek their intersection. Since x = 40 at the intersection, the two job offers are supposed to be equivalent.
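Written out as a worked equation, the intended solution comes down to a single linear equation in the weekly hours x: setting 7.5x = 300 gives x = 300/7.5 = 40, so the two offers pay the same only at exactly 40 hours per week; below 40 hours the weekly salary pays more, and above 40 hours the hourly wage pays more.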
However, the intended solution only makes sense under two implicit assumptions: (i) only the number of hours worked per week matters, and (ii) a 40-hour work week is expected. The students in the study did not operate under these assumptions. They considered issues like the cost of transportation, health care benefits, and whether or not full-time employment was feasible. These considerations do not lead to the intended solution, but they are not "wrong." The difference between student reasoning and the intended solution can be accounted for in terms of socio-mathematical norms developed at school. Watson (2008) argued that school mathematics is its own discipline, and therefore is apart from professional mathematics. For example, some word and applications problems can be solved by referring to semantic cues, without any reference to mathematics or the story in the problem (Martin and Bassok 2005). For students and teachers this may mean that modelling devolves into a search for official formulas, recalling a similar problem from class, or attending only to keywords. Even in a laboratory setting, students working on problems couched in a real-world context can be influenced by the expectations of school mathematics to give "more legitimate" solutions based on known formulas (Schoenfeld 1982b). Similarly, Julie and Mudaly (2007) hypothesized that teachers express a preference for models that are relevant to their immediate circumstances. Thus, a typical response to students using their "real world" reasoning, like those in Manouchehri and Lewis's (2017) study, might be to dismiss it as incorrect in order to refocus the student toward the intended solution. While this option may lead to short-term success, it can also have long-term consequences. Students who receive consistent negative feedback may learn to respond to problems in ways consistent with the expectations of "school mathematics" rather than with their own reasoning (see, for example, Engle 2006). Indeed, the literature is full of examples of students who generalize their own (non-mathematical) rules based on this kind of feedback or who do not check whether their own responses make sense (Erlwanger 1973; Greer 1997; Schoenfeld 1982a, 1991; Verschaffel et al. 2000). These lines of inquiry have influenced research into the teaching and learning of modelling. Scholars have shifted their focus onto students' current knowledge and understanding (Blum and Borromeo Ferri 2009; Doerr 2006; Schukajlow et al. 2015; Stender and Kaiser 2015; Wischgoll et al. 2015). While a socio-mathematical perspective articulates the tension between school mathematics and student thinking, it does not yet account for how students might make sense of modelling tasks that are used to further curricular aims. Some work still needs to be done on how to anticipate what students might suggest and how to productively interpret those suggestions. The purpose of this study was to explore the ways in which students' ways of reasoning might diverge from the intended solutions of the task setter who aims to provide students with experiences addressing particular curricular objectives.

Empirical and Theoretical Background

There are many theoretical perspectives on the nature of mathematical modelling and what it entails. Kaiser (2017) provides a recent and comprehensive survey. One perspective is termed a cognitive approach because it foregrounds mathematical thinking and emphasizes analysis of students' modelling processes.
Since the main goals of the cognitive approach are to reconstruct individuals' modelling routes or to identify difficulties encountered by students during their modelling activities (Kaiser 2017), it is an appropriate approach for studying how student reasoning diverges from intended solutions while working on tasks with intended curricular aims. In the cognitive view, modelling is a process that transforms a non-mathematical question into a mathematical problem to solve. A model is then a conceptual correspondence between real-world entities and phenomena and a mathematical expression. The modelling process can be decomposed into a series of cognitive and mathematical activities (e.g. Blum and Leiß 2007; Maaß 2006) which replace a real-world system with a mathematical interpretation that can be analysed mathematically. Results are then interpreted in terms of real-world constraints and assumptions, and the model is modified if necessary. Simplifying/structuring and mathematizing are central to setting up the mathematical problem to solve. They are the most challenging activities to carry out (Galbraith and Stillman 2006; Stillman et al. 2010). Simplifying/structuring includes identifying conditions and assumptions from the real-world context, establishing variables, and acknowledging that some variables or constraints are unimportant. Mathematizing refers to introducing conventional representational systems (e.g., equations, graphs, tables, algorithms) to present mathematical "properties and parameters that correspond to the situational conditions and assumptions that have been specified" (Zbiek and Conner 2006, p. 99). The cognitive approach highlights the role individuals' prior knowledge and decision-making play in mathematical modelling. Stillman (2000) reported on a tripartite framework distinguishing three knowledge sources used by secondary students during mathematical modelling: academic, episodic, and encyclopaedic. Each knowledge source derives from the individual's prior experiences. Academic knowledge derives from the study of academic topics (e.g. linear equations, kinematics). Encyclopaedic knowledge is general knowledge about the world (e.g. that one ought to check for traffic before crossing a road). Episodic knowledge is truly personal and experiential (e.g. recalling a ride to the top of the Empire State Building on a recent trip to New York City). Most reasoning during mathematical modelling occurs as a blend (Fauconnier and Turner 2003) of real-world knowledge and mathematical knowledge (Czocher 2013). Yet Stillman (2000) found that episodic knowledge has a stronger influence on mathematical modelling than the other two forms of knowledge, suggesting that students draw more from their personal experiences than from what they learn in other subject areas or from general world knowledge. Therefore, how individuals engage in modelling depends as much on their prior non-mathematical experiences as on their mathematical knowledge. Yet knowledge on its own is not a good predictor of task performance. Research from a long line of inquiry into transfer of knowledge has demonstrated that possessing relevant knowledge of mathematics or of the modelling task context is not sufficient for addressing the task (Nunes et al. 1985; Verschaffel et al. 2000). Equally important are whether the individual brings her knowledge to bear on the task and the decisions she makes about how to use that knowledge.
Specifically, because modelling involves generating idealizations of the real-world situation (Borromeo Ferri 2006), any decision made by the modeller to simplify the problem filters, and is filtered by, the individual's knowledge sources. As the study of Manouchehri and Lewis (2017) shows, differences between the intended solution and the students' ideas are not limited to the peculiarities of school mathematics; they depend on students' encyclopaedic and episodic knowledge. In their Job Problem, the intended solution assumes that the only meaningful variable is the number of weekly hours worked. The students raised issues based on their encyclopaedic and episodic knowledge; they wished to consider health care benefits and ease of transportation. Considering these important variables necessarily changes the mathematics used. For example, if transportation is the most important factor (rather than hours worked), an individual should choose the job she can get to reliably rather than the job she cannot get to at all. The interdependency of the phases of modelling with individuals' knowledge leads to idiosyncratic and non-linear individual modelling routes (Ärlebäck 2009; Borromeo Ferri 2006; Czocher 2016). That responses are idiosyncratic means that making sense of, or responding to, student work on modelling tasks, even on tasks purportedly as straightforward as those with intended solutions, is difficult. For example, Schoenfeld (1982b) asked undergraduate mathematics majors to estimate the number of cells in an adult human body. The intended solution was a "ballpark estimate," based on the assumption that a human is shaped roughly like a cylinder and crude estimates of the cylinder dimensions. Instead, the participants sought increasingly finer estimates of the volume of the human body, without pausing to evaluate their own productivity. Since the marginal increases in precision for measurements of human volume would not have impacted the cell estimate substantially, Schoenfeld interpreted the students' work as an example of metacognitive failure. A teacher's response in this situation might also have been to classify the student's work as incorrect because the student did not use the intended strategies. However, given the relative scale of human volume to cell size, the students' activity can be understood as sensible. Likewise, a student who answers the Job Problem with the question "Is there a bus stop at both jobs?" might be considered to be evading the mathematical problem. The foregoing discussion raises questions about how to interpret students' thinking on modelling tasks productively. In particular, we can wonder: How does learners' real-world knowledge guide their selection of relevant variables and assumptions? And are there productive ways to frame students' choices that can guide facilitators? To answer such questions, it is necessary to examine students' modelling behaviour within the task environments that they may encounter in classrooms, from a perspective that assumes the students' responses are sensible. To study how student work diverges from intended solutions, I selected Lobato's (2006, 2012) actor-oriented theory of transfer as a theoretical lens. This is an appropriate choice because, from a cognitive perspective, the modelling process is conceived as a blend of disparate knowledge bases, implying that some form of transfer of knowledge to a novel setting occurs.
Viewing individuals' knowledge as experiences then allows examination of how "rational operations emerge from experience" (Jornet et al. 2016, p. 290). That is, actor-oriented theory begins from the perspective that students' activities are sensible. As a premise, actor-oriented theory distinguishes between an actor's perspective and an observer's perspective. Thus, there is a natural mapping between the (actor, observer) pair to the (student work, intended solution) pair. In the language of actor-oriented theory, "taking an observer's point of view entails predetermining the particular strategy, principle, or heuristic that learners need to demonstrate in order for their work on a novel task to count" (Lobato 2012, p. 245). In contrast, from an actor's point of view, the researcher investigates how the student's prior experiences shaped their activity in the novel situation, even if the result is non-normative or incorrect performance (Lobato 2012). In summary, "solutions which might be viewed as erroneous from a disciplinary perspective, are treated instead as the learner's interpretation" of the task (Danish et al. 2017). In this way, the operational definition for intended solution becomes a "predetermined particular strategy, principle, or heuristic" and the focus of the present study is on how the participants interpret the task situation. Under actor-oriented theory, the authenticity of a task is determined by the extent to which the task context aligns with, and is amenable to, the participants' lived experiences. Thus, modelling problems are those that permit students to bring their knowledge to bear in defining their own variables and introducing their own assumptions. The actor-oriented theory of transfer can be applied to modelling because it acknowledges that knowing and representation are products of how the student interprets the task situation and that the selection of ideas need not be intentional (Jornet et al. 2016;Lobato 2012). Within modelling, structuring refers to imposing mathematical structure on a real-world situation. This is accomplished through introducing variables and parameters which measure attributes of entities in the real world (Thompson 2011). Real-world conditions and assumptions are also identified. The variables, parameters, conditions, and assumptions are then put in relation to one another, using mathematical objects, their properties and structures, and relations and operations to join them. As discussed above, each of these activities depends on the individual modeller's current interpretations and prior experiences. The idea that structuring is an active process carried out by the modeller, rather than a passive process where an inherent structure is present in a situation and then discovered and extracted, is also emphasized in the actor-oriented theory perspective. To study how individuals' models may diverge from intended solutions, an analytic framework capable of capturing student decision making while tracing the intended solution was needed. The framework needed to allow me to document how the participants defined a mathematical problem from a nonmathematical one. The process is not straightforward and there are many cognitive obstacles within it (Galbraith and Stillman 2006). 
Since the process includes anticipating the mathematical structures and procedures that could be used and then implementing that anticipation, the framework needed to include identifying, prioritizing, and mathematizing appropriate variables, conditions, and assumptions (Czocher and Fagan 2016; Niss 2010; Stillman and Brown 2014). The analytic framework, summarized in Fig. 6.1, zooms in on the simplifying/structuring phase of modelling (see Blum and Leiß 2007). The framework is appropriate because each successive step is a site where the modeller's choices may diverge from the intended solution. Therefore, the framework allows for divergence to be documented as described below in the methods section and allows for the research questions to be addressed.

Methods

I conducted a laboratory-based study of how student thinking diverged from intended solutions on tasks with intended curricular aims.

Data Collection

Data were generated via a set of one-on-one task-based interviews with twelve students enrolled in high schools (8) and universities (4) from different states in the United States. There were four participants from each of the following levels: high school algebra, post-algebra (high school geometry and calculus), and undergraduate differential equations. The purpose of including mathematically and geographically diverse students in the sample was to explicitly seek similarities in their ways of approaching the problems, not to treat them as comparison groups. This study examines student work on the four tasks presented in Table 6.1. As shown, the tasks were drawn from prior research and research-based educational materials. Two of the task statements from Table 6.1 read as follows. Letter Carrier (Swetz and Hartzler 1991): "A letter carrier needs to deliver mail to both sides of the street. She can go to all the boxes on one side, cross the street, and deliver to all the boxes on the other side. Or she can deliver to one box, cross the street, deliver to two boxes, cross and deliver to two boxes, and so on until all the mail has been delivered. Which is the best route?" (worked on by 4 post-algebra and 3 algebra participants). The Cell Problem (Schoenfeld 1982b): "Estimate how many cells might be in an average-sized adult human body." Tasks were appropriate to each student's mathematical level, and each had a clear curricular objective, that is, mathematics content that would be brought out if the student carried out the task writer's intended solution. However, the tasks were presented in a way that allowed the participants to generate their own variables and assumptions. In this way, each task would allow me to trace the cognitive pathways learners might take, which would reveal the tensions between student thinking and the intended solution. At the start of each session, the participant was presented with a task and asked to read it aloud. I assured participants that they would not be graded, as I was interested only in their thinking. Participants worked for as much time as needed to come to a conclusion (usually within 30 min). Follow-up questions focused on understanding how important the students' choices of variables and assumptions were to them. In this way, the interviews elicited the students' mathematical thinking as they engaged in the modelling tasks and did not focus on guiding the student to an intended solution.

Data Analysis

The participants generated 24 sessions, which were transcribed. Analysis focused on how students defined a mathematical problem to solve by decomposing student work according to the analytic framework ( Fig.
6.1) and comparing their work to the intended solution for each task. Each student's work on each task was analysed for whether they engaged in mathematical modelling, what variables and assumptions were identified (mentioned explicitly), whether they were prioritized (designated as being important to the model), and whether or not they were mathematized (represented mathematically). "Variables" designated independent and dependent variables, parameters, or constants that referred to measurable attributes of a physical entity (see Thompson 2011). "Assumptions" were defined as constraints of the real-world situation that participants identified explicitly or implicitly as impacting the values of, or relationships among, variables of interest. To understand how student-generated models diverged from the solutions envisioned by task writers, I examined the extent to which student-generated variables and assumptions differed from those in the task writers' intended solutions. In the intended solutions, I classified a variable or assumption as identified under two conditions: (1) if it was mathematized or (2) if the intended mathematisation necessitated that a variable or assumption be ignored. An outline of the intended solutions, along with intended variables and assumptions, and curricular objectives (aligned with CCSSM 2010) follows:

Letter Carrier: Assume a straight road with length l and width w. Assume that the street has n evenly spaced mailboxes on each side of the street, that the mailboxes are directly across from one another, and that they are at the centre of each lot. Let d_A and d_B be the distances travelled along the first and second paths, respectively. Then d_A = 2l + w − l/n and d_B = nw + l. We find that d_A = d_B when l/n = w, or when the width of the road is equal to the width of each lot. As long as w < l/n, the second path will be shorter. Curricular objective: linear equations, working with variables and parameters. Assumptions: the road is straight, mailboxes are equally spaced, mailboxes are directly across from one another, mailboxes are at the centre of each lot, there are an equal number of mailboxes on each side, the "best" route has the shortest distance. Variables: number of mailboxes, length of the street, width of the street, total distance travelled.

Cell Problem: Assume cells are cubes whose dimensions are approximately 1/5000 of an inch on a side. Assume a human is a box with dimensions 6 × 6 × 18. Curricular objective: proportions, rates, estimation. Assumptions: cells are shaped like cubes, humans are shaped like boxes, cells are packed inside of humans. Variables: cell side length; human height, width, and depth; number of cells.

Water Lilies: Since the number of water lilies doubles every day, on day N − 1 there are half as many lilies as on day N. Therefore, on day 29 there are half as many lilies as on day 30. Since the lake is covered on day 30, the lake was half covered on day 29. Curricular objective: exponential growth. Assumptions: each lily produces one new lily during the growth period. Variables: growth period, growth rate, final time.

Empire State Building: For an object moving at a constant rate, distance is speed multiplied by time: d = r × t. Estimate the height of the Empire State Building, estimate the speed of the elevator, and solve for t. In order to use this model, one must implicitly assume that the elevator makes no stops and that its speed is constant. The latter is reasonable if r is taken to be the average velocity over the duration of the ascent.
Curricular objective: rates, linear equations. Assumptions: the elevator makes no stops and moves at constant speed. Variables: height of building, rate of elevator, time elapsed.

To understand how students handled the variables and assumptions they generated, I listed all variables and assumptions referenced by each participant on each task. The result was the set of variables and the set of assumptions identified on each task collectively by all participants. Note that some participants generated more than one model on a given task. I then tabulated the frequency with which each variable and assumption was referenced across all participants: how many times a variable or assumption was identified, how many times it was prioritized for inclusion in a mathematical representation, and the number of times it appeared in a mathematical representation. If a variable or assumption was mathematized, it was assumed to have been prioritized. For example, if a participant included "height of the Empire State Building" in her representation but never stated it verbally, it was assumed to have been both identified and prioritized. Next, I calculated the following percentages from the analytic framework:

% identified = (# times identified) / (# participants who worked on the task)
% prioritized = (# times prioritized) / (# times identified)
% mathematized = (# times mathematized) / (# times prioritized)

Results

Data analysis led to two interrelated findings: (a) participants' choices did not lead to the intended solutions and (b) their selection of relevant variables and assumptions reflected their desire to represent complexity (rather than to simplify). In elaborating these findings below, I show how their episodic and encyclopaedic knowledge influenced their mathematical choices via examples of how their work diverged from the intended solutions. No participants produced the intended solution for the Cell Problem or the Letter Carrier Problem. None produced the intended solution for the Water Lilies Problem on their first attempt. All identified important variables and assumptions for the Empire State Building Problem, but offered additional variables and assumptions which led to unintended solutions. Since the participants' work did not match the intended solutions, there are two straightforward interpretations I will refute. First, that the problems were too hard for the participants. Second, that the participants failed to transfer their mathematical knowledge to a novel, real-world problem. Closer inspection of the data revealed that neither interpretation is accurate. Indeed, in every case the participants used well-known standard mathematical structures (e.g., linear equations, proportions, etc.; Niss et al. 2007) even though their work was idiosyncratic and sometimes ad hoc. Thus the participants' models were, in most cases, completely reasonable given the variables and assumptions they identified and prioritized. Table 6.2 displays the number of assumptions and variables identified by the participants on each task and compares it to the number of assumptions and variables in the intended solution. On all problems, participants collectively identified more variables and assumptions (except for Letter Carrier) than were in the intended solutions.
This fact (i) implies that the participants knew enough about the task situations to gain entry to the problems, (ii) demonstrates that participants had little difficulty in this stage of modelling, (iii) suggests that they were engaged in the problems, and (iv) shows that they relied on real-world knowledge to help them analyse the situations. These four inferences together refute the interpretation that the problems were too hard. On the contrary, participants tried to include all variables and assumptions that could impact the selected dependent variable. For example, on the Letter Carrier Problem, 4/7 participants discussed various street layouts and mailbox arrangements and how each would impact the letter carrier's path (see Fig. 6.2). One participant explicitly assumed that the letter carrier did not skip any houses (otherwise, to her, the second path would not make any sense at all). Another noted that mailboxes could be arranged directly across from one another (as in the intended solution) or they could be grouped together in a common area where all residents could retrieve their mail. On the Empire State Building Problem, all participants identified the intended variables: speed of the elevator, height of the building, and time elapsed. However, they also identified additional factors affecting the time of ascent: acceleration, weight, the number of stops made, and how long it takes for people to load and unload. Along with these went a variety of assumptions and observations, such as whether or not floors below the observation deck were open to tourists or whether the rate of ascent would be constant. In this way, the students treated the tasks authentically, based on their episodic knowledge of streets, mailboxes, and elevators. On the Cell Problem, only one participant (an undergraduate) gave a ballpark estimate. Instead, participants were concerned about cell shapes and sizes varying over the body, rather than the shape or size of the body. They noted that bones and various organs were made up of different kinds of cells. Some mentioned that nerve cells could be a metre long whereas reproductive cells were much smaller. These concerns signal an unease in accepting that a set of measurements which vary can be replaced by the average of those measurements, which the intended solution expects. These observations both support and challenge the finding of Schoenfeld (1982a, b) that his participants were concerned with finding "more legitimate" solutions. The participants in this study identified sources of variation based on the function of the cells (rather than on location) and desired their models to reflect those sources of variation. This does not necessarily constitute a "wild goose chase" or metacognitive failure. Rather than assume some dimension of variation could be eliminated (or rendered irrelevant altogether in order to simplify the problem), the participants were driven by a desire for the model to accurately and precisely capture their real-world knowledge of the task situations. Similarly to other reports, participants identified variables and assumptions based on their episodic knowledge, their encyclopaedic knowledge (Stillman 2000), or through immediate relevancy to their lives (Manouchehri and Lewis 2017). This was evidenced by responses containing value statements or clarifying questions. For instance, responses included: (1) The letter carrier should take the simplest path. (2) Is there traffic? If so, the letter carrier should take the first path, which is safer.
(3) Does the letter carrier need to return to her vehicle? (4) Does the letter carrier have to visit every house? (5) What shape is the street? (6) What is the purpose of estimating all of the cells in the human body? It would make more sense to count T-cells, or heart muscle cells after a heart attack. (7) Should I count the non-human cells? (8) It makes more sense to time the elevator. (9) It depends on how big the lilies (or the lake) are. Whereas considerations like "Does the size of the lake matter?" get right to the heart of the curricular objectives of a modelling problem that uses exponential growth, the others might be interpreted as attempts to avoid developing a model altogether. Others have suggested writing tasks that avoid this tendency (see, for example, Lesh et al. 2000). However, I offer an alternative interpretation: the participants were not necessarily "avoiding" the problem, but offering a logical, well-reasoned response based on their personal knowledge of the world and the heuristic "what would this situation actually look like?" Individuals develop heuristics for quickly handling decision-making in real-life situations (Gigerenzer 2008), which may support them in identifying important variables and assumptions for mathematical modelling. The participants' responses clarified the "rules" of the real-world situations described in the task statements and sometimes led to simplifying the situation to make it amenable to mathematical representation, while at other times they complicated it. The majority of variables identified on a task were also mathematized in at least one participant's representation (9/11 on the Letter Carrier Problem, 11/12 on the Cell Problem, 11/13 on the Water Lilies Problem, 9/11 on the Empire State Building Problem). This suggests participants' difficulties lay in selecting the most important variables and assumptions in order to fit them to known mathematical concepts. For example, all participants who worked on the Cell Problem observed that the density or arrangement of cells varied over body parts, and all of them acknowledged that the observation was important, but no one was able to mathematize the assumption. One undergraduate progressed as far as describing something like a weighted average for the different organs in the body, but abandoned this strategy before producing a mathematical representation. Similarly, on the Letter Carrier Problem, participants intended to include the shape of the street and variation in mailbox placement because both of these variables impact the distance travelled. As a consequence, only 2/7 (29%) of the students were able to mathematize distance. The majority of identified and prioritized variables did appear in at least one mathematical representation, but this representation did not use the mathematics of the intended solution. For example, on the Letter Carrier Problem, two students focused on the arrangement of the mailboxes along the street because it would impact the total distance the letter carrier would travel. Their images of the street led to drawing a zig-zag path for the letter carrier to follow. Both created paths that would minimize the distance between mailboxes, leading to mathematisation via the Pythagorean Theorem. These choices led to quadratic equations in two variables rather than the intended linear ones.
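One plausible form of such a zig-zag mathematisation, sketched here under assumptions the students themselves may not have made exactly (n boxes per side spaced a distance s apart on a street of width w, with the carrier crossing diagonally between successive deliveries), is

d ≈ (2n − 1) · √(w² + s²),

in which the street width and the mailbox spacing enter through their squares, so the resulting relationships are quadratic in two variables rather than linear.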
In the Empire State Building Problem, one undergraduate participant gave a mathematical model of the form t = h/v + T(p), where t was the total time, T(p) was the length of time it takes for people to enter (or exit), h was the height of the building, and v was the velocity of the elevator. He computed the length of time for people to exit as some per-person rate times the number of people, plus the length of time for the doors to open and close. The student thus transformed a problem about rate into a pair of affine linear equations depending on the number of people riding the elevator. In the intended solutions, many of the variables and assumptions identified, prioritized, and mathematized by the participants were assumed to be unimportant, leading to simpler models. However, it is not necessary, or even necessarily natural, for students to seek these simpler models. At the very least, the participants' choices led to mathematical concepts that were not the same as the curricular objectives of the tasks. And in these cases, the participants' models might be seen as "incorrect" when compared to the intended solutions.

Interpretation and Discussion

In this study, participants tended to prioritize variables and assumptions in order to authentically reflect the complexity they perceived in the situations. They did so regardless of whether the variable or assumption could be mathematized, regardless of the magnitude of its impact, and even regardless of whether the resulting mathematical problem could be analysed with their on-hand mathematical tools. Thus, as in other studies (e.g. Czocher 2013; Ikeda and Stephens 1998; Manouchehri and Lewis 2017), participants struggled to prioritize those variables and assumptions that could be mathematized using their current mathematics knowledge over those that could not. Even though each participant had difficulty prioritizing the variables he or she identified, most variables and assumptions identified appeared in at least one participant's representation. Taken together, these observations refute the idea that participants were unable to transfer mathematical knowledge to a novel problem situation. Instead, the evidence highlights how students' knowledge contributes to the contrast between student work and intended solutions in ways that parallel tensions which arise for those who wish to teach with modelling in the classroom. In particular, participants prioritized variables and assumptions that would preserve precision. Each participant prioritized different variables and assumptions, which were amenable to different mathematical content or representations. It was therefore uncommon that two participants starting from differing sets of initial variables and assumptions produced the same model, let alone the model of the intended solution, which was tied to curriculum content goals. On the surface, it would seem that letting students freely and authentically engage in modelling, even on routine or simple word problems, is incompatible with meeting the curricular goals a teacher might use these tasks for. Moreover, managing a classroom full of distinct solutions seems daunting, a tension that has been reported before (e.g., Chan 2013; Tan and Ang 2013). Concluding that student models are incorrect because they do not match the intended solutions or use curricular mathematics implicitly assumes that the intended solution is correct. It assumes that many of the variables and assumptions important to the participants should be neglected or assumed constant.
But such assumptions cannot always be justified. For example, in real life the (average) speed of the letter carrier will be slower if she chooses the second option or stops at more mailboxes. The many crossings require her to change direction more often and also to check whether she can safely cross the road. If there is a lot of traffic, she may not cross the road at all until she arrives at a pedestrian crossing. In the Empire State Building Problem, the door opening and closing speeds could be conceptualized as a constant that affects the time to ascend the building but would not vary from trip to trip, unless there were more or fewer people entering and exiting. Yet the idea that only the potential distance travelled by the mail carrier or the elevator, or the number of hours worked, should be considered, and that "all other things are equal," is an implicit assumption. These assumptions reveal the mathematical structures aligned with curricular content and representations, and they were not adopted unproblematically by the participants precisely because of their lived experiences, for example, waiting to cross the road safely. Such choices simplify the problem situation in order to make it fit the target mathematics. However, student success in using mathematics to model real-world situations is tied to their ability to see a correspondence between the behaviour of the system to be modelled and its potential mathematisation (Camacho-Machín and Guerrero-Ortiz 2015). Evidence presented here supports the claim that students desire that the mathematical model accurately reflect their lived experiences and empirical observations. This desire can create tension with the conventional simplifications suggested by the intended solutions. Conventional simplifying choices may seem arbitrary to students and contradict what they know to be true about the world. However, the preference for conventional assumptions that target curricular mathematics amounts to just that: preference. Thus intended solutions are correct insofar as they are privileged above other models. Part of the tension that arises when using modelling as a vehicle to foster students' engagement with mathematics content (Julie and Mudaly 2007) is between the intended solution and students' ideas. When student work does not align with the intended solution, it is natural to interpret the student's work as "incorrect." Another common response is to disregard curricular tasks as avenues for developing modelling skills. An actor-oriented perspective offers a middle ground. First, students do transfer mathematical and real-world knowledge to the novel situation described in the task (Lobato 2006). Second, the intended solution is not the correct model; it may simply be a convenient, conventional, or curricular one. From this perspective, it is possible to predict what variables and assumptions students might suggest when allowing them time and space to work authentically on such problems. The participants in this study selected variables and assumptions that would increase their models' precision relative to their lived experiences with the task situations. This interpretation shifts the locus of support to helping students prioritize those which can be modelled using either the mathematics they know or the intended mathematics. To meet the latter goal, it would be necessary to connect the student variables and assumptions to those in the intended solution. For example, making explicit that certain quantities adhere to conventions (e.g.
assuming that elevator speed is constant), not because it is the "correct" assumption but because the assumption makes the problem amenable to a particular mathematical analysis which, in turn, provides insight into the problem. Other examples include variables like the number of doublings in the Water Lilies Problem, which can be seen as a proxy for the intended variable, time elapsed. Variables such as the number of mailboxes, the distance between mailboxes, variation in mailbox placement, and the number of times the street was crossed can all be seen as proxies for the length of the street. Greer (1997) asserted that "doing mathematics should be relatable to the experiential worlds of the pupils, and consistent with a sense-making disposition" (p. 306). The actor-oriented perspective offers a path toward Greer's ideals by illuminating the rationality of the participants' choices. The interview methodology allowed for close examination of participants' responses, but the small sample size and laboratory setting of this study raise questions about the situativity not only of the participants' knowledge but also of its analysis. That is, the findings were observable exactly because participants were free to identify, prioritize, and mathematize their own variables and assumptions without the imposition of the intended mathematics. Furthermore, the theory and methodology privilege student work, and questions about what facilitator competencies might be necessary to bridge intended solutions to student thinking remain unanswered. Hypotheses are already found in the literature. For example, supporting student modelling processes will draw on skill sets like listening (Doerr 2006; Manouchehri and Lewis 2017), scaffolding (Schukajlow et al. 2015; Stender and Kaiser 2015), and attending to student validating and metacognition (Czocher 2014; Goos et al. 2002; Stillman and Galbraith 1998). The actor-oriented theory of transfer, and by extension a transactional view, applied to modelling would be a useful perspective for exploring the viability of these conjectures because it views transfer as distributed across experiences, situations, and discourses among people (Danish et al. 2017; Jornet et al. 2016; Lobato 2012).

Limitations, Future Directions and Recommendations

In conclusion, the issue is not that students fail to transfer (or suppress) their real-world or mathematical knowledge, or that tasks with intended solutions are too inauthentic to foster modelling skills. Students do engage sensibly with these problems, and their willingness to engage in curricular tasks needs to be nurtured rather than discouraged. The path forward is to find ways to lead students to mathematics content that allows them to model the world as they see it, rather than constraining them to see the world as curricular mathematics allows. Part of learning modelling as a practice is learning the conventions about which variables or conditions can acceptably be ignored and under what conditions; but that is only part. Being explicit about the conventions and connecting the conventional decisions to the students' natural ways of thinking may help the facilitator and the student develop a shared understanding of the real model, how it was chosen, and why. It might not be enough to show students that some considerations can be ignored (or variables replaced with constants); rather, there is a need to explore justifications for why this is so.
A new frameshift mutation in L1CAM producing X‐linked hydrocephalus Abstract Background X‐linked hydrocephalus (XLH), characterized by mental retardation and bilateral adducted thumbs, often proves to be a genetic disorder of L1CAM. This gene encodes the L1 cell adhesion molecule (L1CAM), a protein that plays a crucial role in the development of the nervous system. The objective of the study was to report a new disease‐causing mutation site of L1CAM and to gain further insight into the pathophysiology of hydrocephalus. Methods We collected samples from a couple and their second hydrocephalic fetus. Whole‐exome sequencing and in‐depth mutation analysis were then performed. Results The variant c.2491delG (p.V831fs), located in exon 19 of L1CAM (chrX:153131214), could damage L1CAM function by producing a frameshift in the translation of the fibronectin type‐III domain of L1CAM. Conclusion We identified a novel disease‐causing mutation in L1CAM for the first time, which further confirms L1CAM as a gene underlying XLH cases. | INTRODUCTION Hydrocephalus, the abnormal accumulation of intracranial cerebrospinal fluid (CSF), is a common malformation of fetuses. Often accompanied by other structural brain lesions, it affects approximately one in every 1,000 children born (Tully & Dobyns, 2014; Warf, 2005). The pathogenesis of this process remains to be fully elucidated; nonetheless, a few points are established. The diagnosis of hydrocephalus is mainly based on ultrasound detection, which is neither precise nor timely. Thus, genetic sequencing has become increasingly popular and important in recent years. The purpose of this study was to report a new disease-causing mutation site of L1CAM, making a small step forward in understanding the pathogenesis of hydrocephalus. | Ethical compliance The research was approved by the Institutional Committee for the Protection of Human Subjects (Institutional Review Board of Sichuan Provincial Hospital for Women and Children), and all patients signed informed consent. | Sample collection The blood samples of the parents and the tissue of their fetus were collected and kept at −80°C. | Mutation analysis Genomic DNA was extracted from tissue and blood samples according to standard protocols. An Applied Biosystems 3730xl DNA Analyzer was used to sequence the products of PCR amplification. Sites requiring confirmation were identified from the Sanger sequencing peaks; specific primers were designed with Primer Premier 5 according to the site information in UCSC, and the sites were sequenced to confirm whether they carried the variant. The library was constructed using the Roche SeqCap EZ MedExome Enrichment kit and sequenced on an Illumina HiSeq X machine. Raw reads were mapped to the human reference genome GRCh37/hg19 using BWA (v0.7.12-r1039) (Li & Durbin, 2009), and the SAM files were converted to BAM files and sorted using SAMtools (v0.1.18). Picard v1.134 (http://broadinstitute.github.io/picard/) was then used to mark duplicate reads. Variants were called by GenomeAnalysisTK (GATK v3.7) (McKenna et al., 2010) and annotated by ANNOVAR (2016Jul16 version). The Exome Aggregation Consortium (ExAC Version 0.3.1), the 1000 Genomes Project, ESP6500, and other public databases were used to filter the variants. The candidate pathogenic mutations were verified by Sanger sequencing. | RESULTS A 25-year-old woman was referred to our department after one spontaneous abortion and two voluntary terminations of pregnancy due to fetal hydrocephalus.
Blood samples of this couple and tissue of the last hydrocephalic fetus were collected. The familial pedigree was consistent with X-linked recessive inheritance (Figure 1). During the first pregnancy, a natural abortion happened around 11 weeks of gestation. As for the second pregnancy, a fetal ultrasound scan at 24+ weeks of gestation confirmed the presence of hydrocephalus, and the woman required an interruption of pregnancy. The third pregnancy was similar to the second one. The ultrasound scan evaluation at 25 + 4 gestation weeks revealed bilateral ventriculomegaly with dilatation of the third ventricle and polyhydramnios. Magnetic resonance imaging (MRI) further confirmed the presence of callosal agenesis and lissencephaly (Figure 2). After collecting the samples of the couple and the third fetus, we used whole-exome sequencing and in-depth mutation analysis to identify the potential causes. The variant was confirmed in DNA extracted from the fetus and mother (Figure 3). Potential variants were called by GenomeAnalysisTK, public databases were used to filter the variants, and the effects of these variants were annotated with the ANNOVAR programs. The proband was found to be hemizygous for L1CAM with the mutation NM_000425.5:c.2491del:p.(Val831Serfs*20), and the mother was found to be heterozygous. Computational analysis predicted that this was a frameshift mutation, located in exon 19 of L1CAM (chrX:153131214), in the region coding the fibronectin type-III domain of L1CAM. The mutation leads to errors in amino acid translation and early translation termination, and therefore belongs to the loss-of-function class. Moreover, the ClinGen haploinsufficiency score and the pLI in ExAC of L1CAM were 3 and 1, respectively (https://www.ncbi.nlm.nih.gov/projects/dbvar/clingen/clingen_gene.cgi?sym=L1CAM&subject= and http://exac.broadinstitute.org/gene/ENSG00000198910), suggesting a strong relationship between loss-of-function mutations and disease. In addition, this mutation has not yet been reported in gnomAD or the 1000 Genomes Project. Furthermore, according to OMIM, diseases related to L1CAM are X-linked recessive, and the reported phenotype was consistent with that of the proband. The genetic pattern was consistent with "the L1 syndrome." According to the ACMG (American College of Medical Genetics and Genomics) guidelines, with evidence of PVS1, PM2, and PP4, it was classified as an X-linked pathogenic mutation (Figure 3). | DISCUSSION Hydrocephalus, including X-linked hydrocephalus (XLH), often proves to be a genetic disorder, characterized by mental retardation and bilateral adducted thumbs (Okamoto et al., 2004). The features of XLH include enlargement of both the third and lateral ventricles, agenesis of the corpus callosum, atrophy of corticospinal descending pathways in the pons and medulla, and spasticity of the upper and, mostly, lower limbs (Itoh & Fushiki, 2015; Tonosaki et al., 2014). Despite its unclear pathogenesis, most reported cases show a strong link between mutations of L1CAM and XLH. According to Vos et al.'s report (Vos et al., 2010), 85% of fetuses with hydrocephalus carried an L1CAM mutation when they had three or more L1 syndrome-related morphological alterations and more than one affected relative. Ferese et al. (2016) reported that a splicing mutation (NM_000425.4:c.1267 + 5delG) in L1CAM, which produced skipping of exon 10, could result in hydrocephalus.
In Liebau's study (Liebau, Gal, Superti-Furga, Omran, & Pohl, 2007), a mutation at the beginning of intron 18 of L1CAM was related to agenesis of the corpus callosum, adducted thumbs, hydrocephalus, and mental retardation. Hübner et al. (2004) found that a mutation of L1CAM in two unrelated families resulted in a frameshift due to insertion of the first 10 bp of intron 5 into the mature mRNA of L1CAM, leading to a largely truncated protein. In our study, we found an NM_000425.5:c.2491del:p.(Val831Serfs*20) variant, located in exon 19 of L1CAM (chrX:153131214), that could damage L1CAM function by producing a frameshift in the translation of the fibronectin type-III domain of L1CAM, resulting in bilateral ventriculomegaly with dilatation of the third ventricle, polyhydramnios, callosal agenesis, and lissencephaly. In summary, we identified a novel XLH-causing mutation, NM_000425.5:c.2491del:p.(Val831Serfs*20), in L1CAM.
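As context for how a single-nucleotide deletion such as c.2491del disrupts the downstream protein, the short Python sketch below re-groups a sequence into codons before and after deleting one base. The sequence, the deletion position, and the function names are hypothetical illustrations, not the actual L1CAM cDNA; the point is only that every codon downstream of the deletion shifts, which is why frameshifts typically alter all subsequent residues and soon reach a premature stop, as predicted here for p.(Val831Serfs*20).

```python
# Minimal illustration of a frameshift caused by a single-base deletion.
# The sequence below is a made-up toy example, NOT the real L1CAM cDNA.

def codons(seq):
    """Split a coding sequence into consecutive triplets (codons)."""
    return [seq[i:i + 3] for i in range(0, len(seq) - len(seq) % 3, 3)]

def delete_base(seq, pos):
    """Return the sequence with the base at 0-based position `pos` removed."""
    return seq[:pos] + seq[pos + 1:]

toy_cds = "ATGGTTCCAGGAACTTGTAGCTGA"   # hypothetical short coding sequence

wild_type = codons(toy_cds)
frameshift = codons(delete_base(toy_cds, 4))  # delete one base, as in c.2491del

print("wild type :", wild_type)
print("frameshift:", frameshift)
# Codons upstream of the deletion are unchanged; the codon containing the
# deletion and every codon downstream are re-grouped, so the encoded amino
# acids differ and a new in-frame stop codon usually appears early,
# truncating the protein (as predicted for p.V831fs).
```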
The interaction of Escherichia coli topoisomerase IV with DNA. The two type II topoisomerases in Escherichia coli, DNA gyrase and topoisomerase (Topo) IV, share considerable amino acid sequence similarity, yet they have distinctive topoisomerization activities. Only DNA gyrase can supercoil relaxed DNA, whereas during oriC DNA replication in vitro, only Topo IV can support the final stages of replication, processing of the late intermediate and decatenation of the daughter molecules. In order to develop an understanding for the basis of the differential activities of these two enzymes, we have initiated a characterization of Topo IV binding to DNA. We find that unlike gyrase, Topo IV neither constrains DNA in a positive supercoil when it binds nor protects a 150-base pair region of DNA from digestion with micrococcal nuclease. Consistent with this, DNase I footprinting experiments showed that Topo IV protected a 34-base pair region roughly centered about the topoisomerase-induced cleavage site. In addition, Topo IV preferentially bound supercoiled rather than relaxed DNA. Thus, the DNA binding characteristics of Topo IV are more akin to those of the type II eukaryotic enzymes rather than those of its prokaryotic partner. Four DNA topoisomerases (Topos) 1 have been identified in Escherichia coli. Topo I (encoded by topA; Refs. 1 and 2) and Topo III (encoded by topB; Ref. 3) are type I enzymes. DNA gyrase and Topo IV are type II enzymes. Both gyrase and Topo IV are composed of two subunits: GyrA and GyrB (4,5) and ParC and ParE (6), respectively. DNA sequence analysis shows that parC and parE have significant homology with gyrA and gyrB, respectively. Whereas these two topoisomerases share some common biochemical features, they can catalyze different reactions in vitro (7,8) and appear to have different functions in vivo (9), although both enzymes are essential for cell survival. In an ATP-dependent fashion, gyrase can supercoil relaxed DNA, catenate and decatenate DNA rings, knot and unknot circular DNA, and convert positive supercoils directly to negative ones. Gyrase will relax negatively supercoiled DNA in the absence of ATP (10). All Topo IV-catalyzed reactions require ATP. Topo IV will relax both positive and negative supercoils, knot and unknot DNA, and decatenate DNA rings (7,11).
During DNA replication in vitro, only Topo IV is capable of supporting the terminal stages of replication, processing of the late intermediate 2 and decatenation of the daughter molecules (8). Both gyrase and Topo IV can support nascent chain elongation during theta-type DNA replication in vitro (12). Genetic analysis has suggested that both Topo IV and gyrase are involved in chromosome decatenation (6,13). This was supported by the study of Bliska and Cozzarelli (14) showing that gyrase was responsible for unlinking catenanes produced as a result of a recombination event. On the other hand, Adams et al. (9) showed that pBR322 replication catenanes accumulated at the nonpermissive temperature only in parC or parE strains, not in gyrA or gyrB strains. In order to supercoil DNA, gyrase must be able to pass strands in one direction; otherwise only relaxation would occur. The mechanisms of gyrase-catalyzed reactions and the interaction of the enzyme with DNA has been studied extensively (15). When bound to DNA, gyrase constrains about 150 bp of DNA about itself in a positive toroidal supercoil (16 -19). This is consistent with the results of both nuclease protection experiments (17) and DNase I footprinting experiments (18,19). This ability of gyrase to order DNA locally with respect to the site of DNA cleavage during strand passage likely accounts for its supercoiling ability. Both gyrase and Topo IV are targets for the quinolone and coumarin antibiotics (4,5,11,20), yet in E. coli, resistance to these antibiotics arises only via mutation of the gyrase genes (21)(22)(23)(24). Thus, although gyrase and Topo IV seem quite similar, their cellular functions are different. We have initiated an investigation into the mechanisms of the Topo IV topoisomerization activities in order to illuminate the structural basis for the differences in gyrase and Topo IV function. We find that unlike gyrase, Topo IV neither wraps DNA about itself nor distorts the path of the helix significantly on binding. Instead, the enzyme appears to bind a region of 34 bp centered about the cleavage site. Again, unlike gyrase, Topo IV prefers to bind supercoiled rather than relaxed DNA. MATERIALS AND METHODS Preparation of Form II DNA-Form I pBSM13 DNA (80 g, Stratagene) was treated with DNase I (0.13 g, Pharmacia Biotech Inc.) for 30 min at 30°C in a reaction mixture (400 l) containing 5 mM Tris-HCl (pH 7.6 at 30°C), 125 mM NaCl, 20 mM MgCl 2 , 100 g/ml bovine serum albumin, and 1 g/ml ethidium bromide. The reaction was stopped by the addition of EDTA to 50 mM. DNA was recovered by ethanol precipitation after phenol extraction and resuspended in 10 mM Tris-HCl (pH 7.6 at 4°C) and 1 mM EDTA. Topoisomerase-induced Constraint of Supercoils in DNA-Reaction mixtures (30 l) containing 50 mM Tris-HCl (pH 7.8 at 23°C), 10 mM MgCl 2 , 10 mM dithiothreitol, 26 M NAD, 10 ng/ml tRNA, 25 g/ml bovine serum albumin, form II pBSM13 DNA (0.4 g), and the indicated amounts of either gyrase or Topo IV were incubated at 23°C for 20 min. E. coli DNA ligase (8 ng) was then added, and the reactions were continued for 1 h. EDTA, NaCl, SDS, and proteinase K were then added to 25 mM, 200 mM, 0.07%, and 3 g/ml, respectively, and the incubation was continued for an additional 2 h. The DNA samples were then electrophoresed through 1.5% vertical agarose gels at 4 V/cm for 14 h in the presence of 13 g/ml chloroquine phosphate using 50 mM Tris-HCl (pH 7.9 at 23°C), 40 mM NaOAc, and 1 mM EDTA as the electrophoresis buffer. 
The DNA was visualized by staining with ethidium bromide. * These studies were supported by Grant GM 34558 from the National Institutes of Health and Grant NP 865 from the American Cancer Society. The costs of publication of this article were defrayed in part by the payment of page charges. This article must therefore be hereby marked "advertisement" in accordance with 18 U.S.C. Section 1734 solely to indicate this fact. 1 The abbreviations used are: Topo, topoisomerase; bp, base pair; Photonegatives of the gel were scanned using a Millipore BioImage Densitometer to determine the distribution of topoisomers. Micrococcal Nuclease Protection-Reaction mixtures (50 l) containing 50 mM Tris-HCl (pH 7.5 at 30°C), 20 mM MgCl 2 , 1 mM CaCl 2 , 2 mM dithiothreitol, 0.1 mM EDTA, 50 g/ml bovine serum albumin, pBSM13 DNA labeled with 32 P by nick translation (60 fmol), and the indicated amounts of either gyrase or Topo IV were incubated at 30°C for 20 min. The indicated amounts of micrococcal nuclease (Boehringer Mannheim) were then added, and the incubation was continued for an additional 10 min. EDTA, SDS, and proteinase K were then added to 50 mM, 0.2%, and 1 g/ml, respectively, and the incubation was continued for 1 h. DNA products were then analyzed by electrophoresis through 7% polyacrylamide gels (29:1, acrylamide to bisacrylamide) at 9 V/cm for 3.25 h using 89 mM Tris borate, 89 mM boric acid, and 1 mM EDTA as the electrophoresis buffer. Gels were dried onto Whatman 3 M paper and visualized by autoradiography. DNase I Footprinting-The DNA substrate was prepared using the polymerase chain reaction (PCR) with plasmid pTH101 (25) as the template and the oligonucleotide primers KanXho (5Ј-TCGAGGCCGC-GATTAAATTCCAAC-3Ј) and KanSma (5Ј-GGGGATCGCAGTGGT-GAGTAACCA-3Ј), one of which was 5Ј 32 P-labeled using polynucleotide kinase. The resulting 276-bp DNA fragment, derived from sequences between the XhoI and SmaI cleavage sites in the kanamycin resistance gene on pTH101, spans a strong Topo IV cleavage site. The PCR products were gel purified before use. DNase I footprinting reaction mixtures (20 l) containing the indicated amount of DNA fragment, 40 mM Tris-HCl (pH 7.6 at 30°C), 6 mM MgCl 2 , 20 mM KCl, 2 mM dithiothreitol, and the indicated amounts of Topo IV were incubated at 30°C for 3 min. The indicated amounts of DNase I (Pharmacia) were then added, and the incubation was continued for 30 s. A stop solution (20 l) containing 100 mM EDTA and 125 g/ml pBSM13 DNA was then added to terminate the reaction. The DNA was recovered by ethanol precipitation after extraction with a phenol-CHCl 3 mixture (1:1), resuspended in 6 l of loading buffer (95% formamide, 20 mM EDTA, 0.05% bromphenol blue, and 0.05% xylene cyanol ff) and electrophoresed through a 7% sequencing gel. The gel was dried onto Whatman 3 M paper and visualized by autoradiography. Dideoxy DNA sequence ladders were prepared using the labeled KanXho and KanSma oligonucleotides as primers and pTH101 DNA as template. DNA Binding Assay-pBSM13 and [ 3 H]pBR322 form I DNA were purified as described by Marians et al. (26). [ 3 H]pBR322 form IЈ and form III (linear) DNAs were prepared by treatment of form I DNA with E. coli Topo I and the EcoRI restriction endonuclease, respectively. EcoRI-linearized pBSM13 DNA was 32 P-labeled using polynucleotide kinase for the experiment shown in Fig. 5. 
DNA binding reaction mixtures (20 l) containing the indicated amount of DNA substrate, 50 mM Tris-HCl (pH 7.6 at 30°C), 20 mM KCl, 10 mM MgCl 2 , 2 mM dithiothreitol, 50 g/ml bovine serum albumin, and the indicated amounts of either gyrase or Topo IV were incubated at 30°C for 20 min. Washing buffer (1 ml of 25 mM Tris-HCl, pH 7.5, 20 mM KCl, 5 mM MgCl 2 , 1 mM EDTA, and 10 mM ␤-mercaptoethanol) was then added, and the reactions were passed through nitrocellulose filters (Millipore) at 1 ml/min. The filters were then washed three times with 1 ml of washing buffer. The filters were dried, and the radioactivity retained was determined by liquid scintillation spectrometry. RESULTS Topo IV Binds DNA Differently Than DNA Gyrase-DNA gyrase binds DNA in a very distinctive manner. The enzyme wraps roughly 150 bp of DNA about itself in a positive toroidal supercoil (15). Three different types of assays were used to detect this. These were (i) assessment of the alteration in the topology of relaxed DNA upon gyrase binding (16), (ii) assessment of the extent of DNA protected from micrococcal nuclease digestion as a result of gyrase binding (17), and (iii) DNase I footprinting (17)(18)(19). We used these three assays to probe the mode of binding of Topo IV to DNA. In order to assess whether Topo IV alters the path of the helix in a significant fashion when it is bound to DNA, Topo IV was bound to singly nicked form II DNA. The nick was sealed with DNA ligase, and the DNA was deproteinized and analyzed by electrophoresis through agarose gels. If binding of the enzyme constrains supercoils, they will become locked into the DNA upon covalent closure of the nick. They can then be observed easily by gel electrophoresis. As reported previously for gyrase, binding of this enzyme to DNA resulted in the induction of supercoils after closure of the nick (Fig. 1, lanes 1-3). The photonegative of the stained gel was traced densitometrically to determine the shift in the position, compared with that in the absence of gyrase, of the center of the distribution of the topoisomers formed in the presence of gyrase. In this way, we could calculate that 0.5 superhelical turns were introduced to the DNA per bound gyrase tetramer. This is similar to the stoichiometry determined previously (16). Although not shown here, it has been determined previously that the superhelical turns induced by gyrase binding are positive (16). The binding of Topo IV to the DNA resulted in a slight shift in the pattern of topoisomers toward a more positive distribution. This corresponded to the induction of 0.06 superhelical turns/Topo IV tetramer (Fig. 1, lanes 4 -8). This was the case even at Topo IV to DNA ratios 4-fold higher than the ratio where gyrase-induced supercoiling was very obvious. Thus, it seemed unlikely that Topo IV was wrapping DNA about itself as gyrase does. Instead, it is possible that Topo IV unwinds duplex DNA somewhat upon binding. To confirm that Topo IV was not wrapping DNA about itself, we determined the extent of DNA protected from micrococcal nuclease digestion by Topo IV binding. As established originally for nucleosomes (27), micrococcal nuclease will cut only in the spacer region between bound proteins. Thus, if a protein wraps DNA about itself, it should protect a significant region of the DNA from digestion by the nuclease. This is clearly observed for DNA gyrase. 
As reported previously (17), under protein to DNA ratios equivalent to one gyrase tetramer/200 bp of DNA, gyrase protected DNA in the size range of 110 -160 bp from micrococcal nuclease digestion (Fig. 2A, lane 2). This is consistent with the ability of gyrase to wrap DNA about itself. No such protection was evident at a similar ratio of Topo IV to DNA (Fig. 2A, lane 3). At 5-fold higher ratios of topoisomerase to DNA, the same pattern of protected DNA was evident for gyrase (Fig. 2B, lane 2), whereas Topo IV protected a wide range of DNA varying in size between the limit products of the micrococcal nuclease digestion to about 700 bp (Fig. 2B, lane 3). The wide size range of DNA protected by Topo IV under these conditions is most 1 and 4), DNA gyrase (lanes 2 and 3), or Topo IV (lanes 5-8) for 20 min at 23°C. E. coli DNA ligase was then added, and the reactions were continued for 1 h. Reaction products were then processed as described under "Materials and Methods" and analyzed by electrophoresis through a 1.5% agarose gel containing 14 g/ml chloroquine phosphate. likely indicative of the binding to the DNA of multiple Topo IV tetramers close enough together to exclude access of micrococcal nuclease to the DNA. In order to determine how large a region of DNA was bound by Topo IV, we performed DNase I footprinting. The substrate was a 276-bp DNA fragment made by PCR using plasmid pTH101 as a template. We had determined that this region of DNA had one major Topo IV cleavage site that could be observed in the absence of quinolones (data not shown). By 5Ј 32 P labeling each primer separately, we were able to easily observe the Topo IV footprint on each DNA strand. The results of the DNase I footprinting analysis (Fig. 3) showed that like all known type II topoisomerases, the Topo IV cleavage sites on the top (Fig. 3D) and the bottom (Fig. 3D) strands were staggered by 4 nt (Fig. 3C). Topo IV protected from DNase I digestion about 34 nt of DNA on each strand roughly centered about the cleavage site (Figs. 3, A-C). Because the cleavage site is staggered, this results in slightly asymmetric protection of the duplex from nuclease digestion. Thus, it seems that the mode of Topo IV binding to DNA is distinct from that of gyrase and is similar to that of the eukaryotic type II topoisomerases (28,29). Topo IV Binds Preferentially to Supercoiled DNA-The studies described in the previous section indicated that gyrase and Topo IV bound DNA differently. Gyrase binds preferentially to relaxed rather than supercoiled DNA (30), consistent with supercoiling being its primary function. Because Topo IV has no supercoiling activity, it seemed likely that the enzyme would bind supercoiled DNA preferentially. This was investigated using nitrocellulose filter binding assays. Topo IV binding to supercoiled (form I), relaxed (form IЈ), and linear (form III) pBR322 DNAs was compared (Fig. 4). The form IЈ DNA was prepared by treatment of form I DNA with E. coli Topo I. The resultant preparation contained no detectable form I DNA and about 5% form II (nicked) DNA. The form III DNA was prepared by digestion of form I DNA with the EcoRI restriction endonuclease. K D was calculated according to Riggs et al. (31). Topo IV bound to form I, IЈ, and III DNAs with K D values of 0.6 nM, 3.3. nM, and 9.3 nM, respectively. Because these DNAs were topological isomers, the different affinities of Topo IV for them can only be attributed to their different topological states. 
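The K D values above come from nitrocellulose filter binding. As a rough illustration of how such a constant can be extracted, the sketch below fits a simple one-site binding model, fraction bound = [P] / ([P] + K D), to hypothetical filter-binding data; it is not the exact treatment of Riggs et al. (31), and the concentrations and bound fractions are invented for the example.

```python
# Minimal sketch: estimate an apparent K_D from filter-binding data by
# fitting a one-site model. Data points are hypothetical, not from the paper.
import numpy as np
from scipy.optimize import curve_fit

def fraction_bound(protein_nM, kd_nM):
    """Simple one-site model assuming protein is in excess over DNA."""
    return protein_nM / (protein_nM + kd_nM)

protein_nM = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])      # hypothetical titration
bound_frac = np.array([0.13, 0.32, 0.62, 0.83, 0.94, 0.98])  # hypothetical data

(kd_fit,), _ = curve_fit(fraction_bound, protein_nM, bound_frac, p0=[1.0])
print(f"apparent K_D = {kd_fit:.2f} nM")
# A lower apparent K_D for supercoiled (form I) than for linear (form III)
# DNA is what underlies the preferential binding described in the text.
```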
Thus, as predicted, unlike gyrase, Topo IV clearly bound supercoiled DNA preferentially to relaxed DNA. This was confirmed by a competition binding experiment. Topo IV binding to form III 5′ [32P]pBSM13 DNA was competed by either unlabeled form I or form III pBSM13 DNA (Fig. 5A). Calculation of the amount of competitor needed to reduce binding to the [32P]-labeled DNA by 50% (Fig. 5, B and C) showed that an 18-fold higher molar excess of linear compared with supercoiled DNA was required. This was in good agreement with the nearly 16-fold difference in K D determined for binding of Topo IV to form I and III DNAs (Fig. 4). Whereas it was clear that Topo IV bound supercoiled DNA better than relaxed DNA, it also seemed that of the two types of relaxed DNA used in the binding experiments, Topo IV bound form I′ DNA roughly 3-fold better than form III DNA. Because the only difference between these two DNA forms is that the latter has ends and the former does not, we considered whether the difference in binding affinities could be accounted for as a result of Topo IV molecules sliding off the linear form. A similar explanation was raised to account for the reduced binding affinity of the Drosophila type II enzyme to form III DNA compared with form I DNA (32). Experiments from Wang's lab (33,34) suggest that the eukaryotic type II topoisomerases are possessed of an annular DNA binding site, and Sekiguchi and Shuman (35) have shown that the vaccinia type I enzyme binds DNA circumferentially. Thus, we prepared [35S]Topo IV and measured its binding to form I′ and form III DNAs by gel filtration. At very low concentrations of Topo IV (<5 nM), we observed 50% more Topo IV bound to form I′ compared with form III. However, this difference was lost at higher concentrations (data not shown). Thus, whereas sliding of Topo IV off of form III DNA may account for some of the observed binding differences, it cannot account for the full effect. It is of course possible that the EcoRI cleavage used to generate the form III DNA disrupted a high affinity binding site. DISCUSSION E. coli has two type II topoisomerases, DNA gyrase and the recently discovered Topo IV. Even though these enzymes share considerable amino acid sequence similarity (6), they support different reactions during DNA replication in vitro (8) and appear to behave distinctively in vivo (9). Both enzymes can support nascent chain elongation during oriC DNA replication reconstituted in vitro with purified proteins (8,12), although only Topo IV can support the terminal stages of replication, processing of the late intermediate 2 and decatenation of the linked daughter molecules (8). Gyrase, but not Topo IV, has been implicated as the enzyme responsible for supporting chain elongation in vivo (36), although conclusions based on the effects on DNA replication of the quinolone antibiotics (36) must now be considered questionable because both gyrase (4,5) and Topo IV (7,11) are sensitive to these drugs. Mutations that display a par phenotype can be mapped to
both the gyrase (37,38) and Topo IV genes (6,39-41), and incompletely segregated nucleoids have been observed in gyrB mutant strains at the nonpermissive temperature (13). On the other hand, Adams et al. (9) demonstrated that catenated pBR322 daughter molecules arise at the nonpermissive temperature only in parC and parE strains, not in gyrA or gyrB strains. In order to better appreciate the basis for the differential action of these two topoisomerases, we have investigated the interaction between Topo IV and DNA. Binding of gyrase to DNA is distinctive. The enzyme wraps roughly 150 bp of DNA about itself in a positive toroidal supercoil (15). It has been proposed that this ordering of the DNA across the surface of the DNA cleavage site facilitates the vectorial strand passage required for supercoiling (42). Topo IV bound DNA in a manner more reflective of a eukaryotic type II topoisomerase than of gyrase. Topo IV protected a small region of 34 bp from attack by DNase I when bound to DNA. Given that the Stokes radius of Topo IV is 65 Å (11), it is highly unlikely that the enzyme wraps DNA about itself. This is supported by the observation that the binding of Topo IV to DNA followed by the subsequent closure of the DNA into a ring did not result in the induction of positive supercoils, as was the case for gyrase (16). Thus, the size of the binding site of Topo IV on DNA is similar to that of the eukaryotic type II enzyme, which has an identical Stokes radius and protects 25-28 bp of DNA from nuclease digestion (28). In spite of the different mode of binding to DNA, Topo IV and gyrase binding sites appear to be dictated by similar sequence features because they overlap considerably (11). Perhaps the different modes of DNA binding account for the observed difference in site preference (11). Filter binding studies showed that Topo IV bound supercoiled DNA preferentially compared with relaxed DNA. This is opposite to the preference shown by DNA gyrase (30) but similar to the preference shown by the eukaryotic enzyme (32) and is in keeping with the inability of Topo IV to supercoil DNA. Thus, the net result of the interaction between closed circular DNA and Topo IV is the removal of turns of the duplex about itself. This, not surprisingly, has apparently resulted in the evolution of an enzyme that preferentially binds its substrate. The difference in binding site size between Topo IV and gyrase and the preferential binding of Topo IV to superhelical DNA help to explain their differential action during the terminal stages of oriC plasmid DNA replication in vitro. The inability of gyrase to process the late intermediate or to decatenate the linked daughter molecules may derive partially from its large DNA binding site and preference for relaxed DNA. Gyrase is probably excluded from binding ahead of the replication forks in the late intermediate where there is only about 200 bp of unreplicated parental DNA available. This exclusionary feature would be considerably less severe for Topo IV. Likewise, because the intermolecular linkages between catenated daughter molecules are similar to supercoils in that they impart writhe to the helix (43,44), Topo IV binding to these replication products should be considerably more stable than that of DNA gyrase. Interestingly, the difference in substrate binding preference exhibited between Topo IV and gyrase confounds an explanation for the observation that in laboratory strains of E. coli, resistance to nalidixic acid maps only to gyrA (22,23) and gyrB (24).
Because the chromosome is supercoiled, one might expect Topo IV rather than gyrase to be bound preferentially and to serve as the primary target for the drug. Thus, it is clear that considerable genetic and biochemical analysis of these two topoisomerases is required in order to understand the complex interactions between them and their role in the cell.
Optical coherence tomography angiography (OCTA) of retinal vasculature in patients with post fever retinitis: a qualitative and quantitative analysis Post fever retinitis is a heterogeneous entity that is seen 2–4 weeks after a systemic febrile illness in an immunocompetent individual. It may occur following bacterial, viral, or protozoal infection. Optical coherence tomography angiography (OCTA) is a newer non-invasive modality that is an alternative to fundus fluorescein angiography to image the retinal microvasculature. We hereby describe the vascular changes during the acute phase of post fever retinitis on OCTA. Imaging on OCTA was done for all patients with post fever retinitis at presentation with 3 × 3 mm and 8 × 8 mm scans centred on the macula, and corresponding enface optical coherence tomography (OCT) scans were obtained. A qualitative and quantitative analysis was done for all images. 46 eyes of 33 patients were included in the study. Salient features noted were changes in the superficial (SCP) and deep capillary plexus (DCP) with capillary rarefaction and irregularity of larger vessels in the SCP. The DCP had more capillary rarefaction when compared to the SCP. The foveal avascular zone (FAZ) was altered with an irregular perifoveal network. Our series of post fever retinitis describes the salient vascular features on OCTA. Although the presumed aetiology was different in all our patients, they developed similar changes on OCTA. While OCTA is not useful if there is gross macular oedema, the altered FAZ can be indicative of macular ischemia. OCTA images can be obtained within seconds, and the technique provides accurate size and localization information and delineates both the retinal and choroidal vasculature. Disadvantages are its limited field of view, inability to demonstrate leakage, increased potential for artifacts (blinks, movement, vessel ghosting), and inability to detect blood flow below the slowest detectable flow [6][7][8][9][10][11] . Standard enhanced depth imaging (EDI) optical coherence tomography (OCT) is only capable of showing the structure of the choroid and choriocapillaris. However, OCTA, being rapid, non-invasive and repeatable, is useful for the assessment of the foveal avascular zone (FAZ) and microvascular changes, along with segmental imaging and evaluation of the superficial capillary plexus (SCP) and deep capillary plexus (DCP), in several retinal vascular diseases [6][7][8][9][10][11] . FAZ and capillary density can be measured at both the SCP and DCP 12 . Despite the lack of standardised protocols for image acquisition and interpretation of image scans, OCTA is widely used for the detection of pathophysiology, early diagnosis, treatment and determination of progression in patients, especially those with vascular pathology 4 . It provides good delineation of the pathology along with volumetric data, with the ability to show both structural and blood flow information 5 . It can therefore be vital in understanding the vascular changes in eyes with post fever retinitis. Methods This prospective, observational, cross-sectional study with protocol title "FAVOUR": "Fever Associated Visual Outcome in Uvea and Retina" was approved by the Narayana Nethralaya Hospital Ethics Committee (EC Ref No: C/2018/08/05). The research followed the tenets of the Declaration of Helsinki and informed written consent was obtained from all study subjects. The study included 46 eyes of 33 patients who presented with post fever retinitis between August 2018 and July 2020.
All patients underwent imaging at presentation to our tertiary eye center on a spectral domain (SD) OCTA system (ANGIOVUE, OPTOVUE, Inc., Fremont, CA, USA) using the Split Spectrum Amplitude Decorrelation Angiography (SSADA) algorithm to quantify vasculature structure, the FAZ, and the superficial and deep retinal vascular plexus densities, by a single trained operator. Scan areas of 3 × 3 mm and 8 × 8 mm centred on the fovea for imaging both superficial and deep retinal plexus were obtained separately for each patient similar to what we have described earlier 13 . The OCT scanner has a scan speed of 70,000 A-scans per second with 304 A scans per B-scan and 304 B-scans per volume. The axial and transverse resolution of the device is 5 µm and 15 µm, respectively. If the signal strength index was less than 40, the scans were repeated and those with poor signal strength were not included. Tropicamide 1% eye drops was used for dilatation of pupil in all patients. A total of 5 scans were acquired for every patient using an internal fixation target. Only the best quality scans were chosen for analyses. Exclusion criteria for OCTA images were those with motion artefacts, double vessels or undue stretching of vessels. Foveal centre for OCTA images was correlated with OCT scans at the same levels. A parafoveal ring of 1-2.5 mm diameter, proportional to the total area was used for analysis relative to the density of vascular flow. Non flow mode in SCP and DCP was used to measure the FAZ. ANGIOANALYTICS (OPTOVUE, Inc., Fremont, CA, USA) blood vessel measurement was used to measure automated vessel flow density. Segmentation was performed both by automated and manual techniques, especially in case of gross oedema or poor scan features. Local fractal dimension was used to represent the presence of vessels in OCTA scans 13 . Calculation of the ratio of local fractal dimension of each pixel in an OCTA image to the maximum fractal dimension was done as described in earlier 13 . Coloured contour, normalised ratio to provide a pictorial representation of an apparent probability index of the presence of vessel was done. Visual comparison of the normalized ratio map with the OCTA image was used to develop a scoring system. The vessel density was computed as a percentage, by counting all the pixels with a normalized ratio between 0.7 and 1.0 and then dividing by the total number of pixels in the OCTA image 13 . Capillary dropouts is a significant parameter to distinguish between normal and diseased eyes. In this study, capillary dropouts were labeled as "spacing between large vessels" and "spacing between small vessels". Spacing between or around the large vessels with a normalized ratio between 0.0 and 0.3 were considered. Pixels in regions around closely packed small vessels, which may be branching out from a large vessel or surrounding small vessels, with a normalized ratio between 0.3 and 0.7 were termed as "spacing between small vessels" 13 . Gadde et al. have described local fractal dimension to calculate vessel density and FAZ area in a normal healthy Asian Indian population 13 . Quantification of vascular parameters can be affected by projection artefacts (PAs) in the DCP. Govindaswamy et al. 14 have described the methodology to reduce the PAs. The inbuilt software from the instrument allows the users to choose flow and non-flow area on a selected layer. OCTA indices from local fractal analysis differentiate the large and small vessel regions of the non-flow area 13 . 
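The pixel classification just described (vessels, spacing between small vessels, and spacing between large vessels, based on the normalized local fractal ratio) lends itself to a simple computation. The Python sketch below is a minimal illustration of that scheme using the thresholds quoted above (0.7–1.0 for vessels, 0.3–0.7 for small-vessel spacing, 0.0–0.3 for large-vessel spacing); the input array is synthetic, and this is not the ANGIOANALYTICS or custom software used in the study.

```python
# Minimal sketch of the fractal-ratio pixel classification described in the
# text. `ratio_map` would be the normalized local-fractal-dimension image of
# an OCTA scan; here it is a random synthetic array for illustration only.
import numpy as np

rng = np.random.default_rng(0)
ratio_map = rng.random((304, 304))   # synthetic 304 x 304 "normalized ratio" map

vessel_mask        = (ratio_map >= 0.7) & (ratio_map <= 1.0)
small_spacing_mask = (ratio_map >= 0.3) & (ratio_map < 0.7)
large_spacing_mask = (ratio_map >= 0.0) & (ratio_map < 0.3)

total_pixels = ratio_map.size
vessel_density_pct = 100.0 * vessel_mask.sum() / total_pixels
small_spacing_pct  = 100.0 * small_spacing_mask.sum() / total_pixels
large_spacing_pct  = 100.0 * large_spacing_mask.sum() / total_pixels

print(f"vessel density:       {vessel_density_pct:.1f}%")
print(f"small-vessel spacing: {small_spacing_pct:.1f}%")
print(f"large-vessel spacing: {large_spacing_pct:.1f}%")
```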
Thus we used custom OCTA metrics rather than the inbuilt software of the machine for the quantitative analysis. The deep capillary plexus has been found to be affected initially in most retinal vascular disorders. We have previously reported significant vascular loss at the level of the deep vascular plexus in different grades of diabetic retinopathy after removal of projection artifacts; this loss was not apparent before the projection artefacts were removed 2 . Our approach is software-based, in which a normalized cross-correlation between the superficial and deep layers was estimated as a scaling factor to subtract the projection artifact. Hence, it can be presumed that, irrespective of the instrument used to obtain the angio images and the algorithm used to reconstruct them, the software-based approach would provide comparable results. Conference presentation. Oral presentation at a meeting: preliminary data at the 35th Singapore-Malaysia Joint Meeting in Ophthalmology in conjunction with the 1st Asia-Pacific Ocular Imaging Society Meeting, held from 17 to 19 January 2020 at the Academia, Singapore General Hospital Campus. Informed consent. Written informed consent was obtained from all study subjects. Results Thirty-three consecutive patients with post fever retinitis (46 eyes, 13 bilateral) aged 18-73 years (median age 40 years) who underwent OCTA between August 2018 and July 2020 were included in the study (male:female 17:16). The presumed etiologies for the post fever retinitis were dengue, rickettsia and typhoid fever based on the serology, as illustrated in Table 1. The predominant clinical picture was of multifocal retinitis with macular oedema. Quantitative analysis. Figure 1 shows images before and after removal of projection artefacts (PAs). In the SCP, the large vessel spacing was increased, with small vessel spacing similar to normative data. The DCP was more affected in terms of small and large vessel spacing, and the vessel density was also significantly lower. The FAZ was not affected during the acute stage in either the SCP or the DCP. Figure 2 shows the fractal analysis after removal of the artefacts. The vascular parameters of small and large vessels in the superficial capillary and deep capillary layers are shown in Table 2. Qualitative analysis. Salient features of active retinitis on OCTA included changes in both SCP and DCP, with capillary rarefaction and irregularity of larger vessels in the SCP. Pruning of the vessels was noted in the SCP and DCP. The FAZ was altered, with a broken perifoveal network suggestive of macular ischaemia on 3 × 3 mm scans in patients with active retinitis and areas of retinal thinning on OCT. Capillary rarefaction was better appreciated in the DCP than the SCP. Bright hyper-reflective material on OCT was seen corresponding to the areas of capillary rarefaction and artefacts at the DCP. The choriocapillaris (CC) layer had a loss of the normal coarse architecture corresponding to areas of retinal oedema on OCT and on the enface image. Both intraretinal and subretinal oedema had a shadowing effect on the DCP, outer retina (OR) and CC layer, causing artefacts and thus impeding an accurate measurement of the vascular density and confirmation of the dropout areas. Middle retinal thickening, highly reflective spots (HRS) and hard exudates were seen as bright areas in the DCP and CC and in the OR in some scans.
These areas of HRS with middle retinal oedema causing radial striations in the Henle's layer were more apparent in the CC layer, which may reverse with disease resolution. Inner retinal thickening and middle retinal thickening corresponded with capillary rarefaction in the SCP and DCP, respectively. FAZ enlargement could not be accurately documented in all patients as many presented with macular edema during the acute phase. Figures 3 and 4 shows an OCTA 3X3mm scan in mild and severe intra retinal edema, respectively. Figure 5 shows CC layer with enface projection of active retinitis. Figure 6 shows an 8X8 mm scan which is useful to detect the extent of retinitis, but has poorer vascular resolution compared to 3X3 scans. Discussion OCTA has the ability to non-invasively provide details of retinal and choroidal vasculature, which helps us better understand the microvascular changes in eyes with retinitis, which cannot be delineated well on FFA due to the vascular leakage in inflammatory conditions. 11 Few studies in post fever retinitis have described changes in the SCP/DCP following dengue and chikungunya retinitis [15][16][17] Schreur et al. 21 in their study of retinal MA in patients with diabetic macular edema (DME) by OCTA found that MA with focal leakage and located in a thickened retinal area were more likely to be detected on OCTA. In their study MAs were located in intermediate and deep plexus. In our series of post fever retinitis, microvascular abnormalities were noted in the SCP and DCP with quantifiable changes in both the smaller and larger vessels. Capillary rarefaction areas corresponding to retinitis patches and pruning of vessels was seen in the active phase. The DCP showed profound capillary rarefaction when compared to the SCP due to the involvement of the middle retinal layers. Our series did not show any individual MAs on OCTA. The CC slabs showed signal void areas which can be attributed to shadowing caused by the overlying retinitis patch similar that reported by Shanmugam et al. 22 The regular vascular pattern or the "angio-architecture" in SCP and DCP was lost in active retinitis, the intraretinal edema and exudation causing an impression of vessel drop out. The flow void areas in the choriocapillaris layer are due to the shadow effect of the superficial edema on to the choroid resulting in loss of the regular coarse architecture. These changes are reversible in non-ischemic retinas once the active inflammation subsides. In a patient with post typhoid fever neuroretinitis, OCTA showed macular thickening and neuro sensory detachment. Choroidal imaging showed abnormal "patchy" flow voids in the choriocapillaris-likely suggestive of a sluggish blood flow or ischemia. Deep range imaging (DRI) of the choroid revealed increased choroidal thickness and dilated choroidal vasculature, indicating a concurrent choroidal inflammation 23 . In our series choroidal imaging had artifacts in acute stages due to intraretinal fluid. In cases where choroidal imaging was possible, we noted altered choroidal architecture with darker areas. We will be, in a future study of these patients analyse the choroidal architecture during the follow up of our patients. A study of OCTA in a patient with varicella retinal vasculopathy showed loss of capillary plexus in both SCP and DCP 24 . 
OCTA has also been useful in choroidal imaging, as described in a case of sympathetic ophthalmia. OCTA of the choroidal vasculature revealed flow void pockets initially at the inflammatory stage, and this normalized over time into a typical granular pattern after initiation of the treatment 25 . (Figure legend: panels (c, d) are the respective images after fractal analysis; red pixels correspond to vessels, blue to large vessel spacing, and yellow to small vessel spacing.) Despite the advantages of being non-invasive and repeatable, OCTA has certain limitations in active retinitis. Its interpretation can be challenging due to projection and motion artifacts, and retinal edema due to active retinitis can cause an impression of vessel drop out and a loss of the regular "angio-architecture" owing to vessel displacements, pruning effects and non-flow areas in edematous regions. The interpretation of OCTA, and particularly of the FAZ, is difficult in patients with gross macular edema and will need longitudinal follow up to assess for enlargement, distortion and possible ischemia. Other limitations include a relatively small field of view, inability to show leakage, and proclivity for image artifact due to patient eye movement/blinking. Manual segmentation can be tedious and time consuming. The variations in capillary density or vascular thickness are influenced by the type of segmentation. We overcame this limitation by having two observers perform the manual segmentation, comparing the findings and taking the average of the two readings. Conclusion Ours is the largest series of OCTA retinal vasculature findings in post fever retinitis. Although the presumed etiology was different in our patients, they developed similar changes on OCTA. Quantitative analysis confirmed that the insult was greater in the DCP. Serial follow up of these patients will help unravel the vascular changes on the road to recovery.
Observational study of therapeutic bronchoscopy in critical hypoxaemic ventilated patients with COVID-19 at Mediclinic Midstream Private Hospital in Pretoria, South Africa Background Flexible fibreoptic bronchoscopy (FFB) has been used for years as a diagnostic and therapeutic adjunct for the diagnosis of potential airway obstruction as a cause of acute respiratory failure or in the management of hypoxaemia ventilated patients. In these circumstances, it is useful to evaluate airway patency or airway damage and for the management of atelectasis. Objectives To evaluate the use of FFB as a rescue therapy in mechanically ventilated patients with severe hypoxaemic respiratory failure caused by COVID-19. Methods We enrolled 14 patients with severe and laboratory confirmed COVID-19 who were admitted at Mediclinic Midstream Private Hospital intensive care unit in Pretoria, South Africa, in July 2020. Results FFB demonstrated the presence of extensive mucus plugging in 64% (n=9/14) of patients after an average of 7.7 days of mechanical ventilation. Oxygenation improved significantly in these patients following FFB despite profound procedural hypoxaemia. Conclusion Patients with severe COVID-19 pneumonia who have persistent hypoxaemia despite the resolution of inflammatory parameters may respond to FFB with removal of mucus plugs. We propose consideration of an additional pathophysiological acute phenotype of respiratory failure, the mucus type (M-type). Severe acute respiratory syndrome coronavirus-2 (SARS-CoV-2) is the novel coronavirus which causes COVID-19. At the time of writing (24th August 2020) and since its initial detection, more than 23 605 542 cases have been confirmed and 812 757 people have died worldwide. [1] In South Africa, the number of confirmed cases has continued to rise and is currently standing at 609 773 with 13 059 deaths since the first cases were reported in March 2020. [1] Eighty-one percent of patients with COVID-19 are asymptomatic while 14.1% present with severe disease and 4% are critically ill and require mechanical ventilation. [2] However, despite the continued global rise in mortality since the outbreak of SARS-CoV-2 in Wuhan, China in 2019, the highly infectious nature of the virus has resulted in limited use of bronchoscopy. It is being utilised primarily for diagnostic or management purposes in non-COVID-19 patients. [3] COVID-19 patients who require mechanical ventilation have been classified into two phenotypes according to Gattinoni [4] and these have been incorporated into the Surviving Sepsis Guideline: the L-and H-type. The L-type is characterised by low elastance (high compliance), is easy to ventilate, has low lung recruitability and may respond to early proning. The H-type is characterised by high elastance (low compliance) that resembles more closely patients with typical acute respiratory distress syndrome (ARDS) and is potentially recruitable. [5] The H-type may have a higher mortality with most patients requiring further interventions such as proning, airway pressure release ventilation (APRV) or even extracorporeal membrane oxygenation (ECMO). [6] The L-type theoretically can progress to the H type over time. Some of these patients in the L-or H-categories fail to improve their oxygenation despite optimal chemotherapy and mechanical ventilation. These patients have a prolonged ventilatory course, often complicated by secondary hospital-acquired sepsis with an associated high mortality. 
[7] It has been presumed that this represents a combination of irreversible pulmonary fibrosis and microvascular pulmonary thrombosis. [8] Currently, there are no studies to support the use of flexible fibreoptic bronchoscopy (FFB) as a therapeutic tool in these patients, primarily because there is no obvious evidence of atelectasis or dynamic hyperinflation suggesting airway pathology. We nevertheless decided to perform FFB after the point of maximal care had been reached without improvement in oxygenation, to assess the status of the airways and to see whether there would be an impact on oxygenation. Study population, setting and data collection We enrolled patients with laboratory confirmed SARS-CoV-2 infection who were admitted to the intensive care unit (ICU) at Mediclinic Midstream Private Hospital (MMPH) in Pretoria, South Africa, from 24th July 2020 until 4th August 2020. These patients had severe COVID-19 pneumonia with the following characteristics: severe refractory hypoxaemia despite maximal mechanical ventilatory support, including proning, and significant deterioration from previous minimal ventilator settings. Maximal ventilatory settings were defined using a volume synchronised intermittent mandatory ventilation (SIMV) mode with a peak pressure >30 cmH 2 O, a fraction of inspired oxygen (FiO 2 ) of 1.0, oxygen saturation <90%, respiratory rate ≥36 breaths/min, inspiratory:expiratory ratio of 1:1, partial pressure of oxygen in arterial blood (PaO 2 ) <60 mmHg and no worsening of radiological features or evidence of mucus plugs. In the first seven patients, the oxygenation index (OI) was not measured; the PaO 2 and arterial saturation while on the same ventilatory parameters were used instead. In the remaining seven patients, 2 had an OI >40 indicating severe pulmonary compromise, 3 had an OI in the moderate range (25-40) and 2 had an OI in the mild range. All laboratory tests and radiological assessments, including plain chest radiography and computed tomography (CT) scan of the chest, were performed at the discretion of the treating physician. Patients 1 and 8 had been airlifted from a peripheral hospital with oxygen saturations of 74% and 88% after having been ventilated on APRV mode for 8 and 7 days, respectively. During this time, there was no improvement in oxygenation and they were subsequently referred to MMPH for consideration for ECMO therapy and further management. Four of the other patients were transferred from a peripheral hospital to MMPH for pulmonology opinion after having been on mechanical ventilation for 2 to 4 days, and the remainder of the patients were de novo admissions to MMPH. All patients underwent high resolution CT scanning, which confirmed features of severe COVID-19 pneumonia according to the British Society of Thoracic Imaging recommendation. [9] All patients were receiving ventilatory support either with APRV or with a lung protective, low tidal volume SIMV mode. Patients were proned during admission, either before ventilation (the de novo admissions) or during ventilatory support. Eight patients received antibiotics, but the remaining six patients had stopped taking antibiotics for more than 3 days prior to bronchoscopy. Plain chest X-ray and an arterial blood gas were performed 1 hour before and 2 hours after bronchoscopy. Ethics approval (ref. no.
M2008102) was obtained from the University of the Witwatersrand, Johannesburg. Informed consent for both the bronchoscopy and study participation was given by the next of kin. Bronchoscopy procedure The bronchoscopy was performed by a single pulmonologist while the patient was undergoing mechanical ventilation in a negative pressure room in the general ICU with all staff in full PPE (N95 masks, goggles, sterile gowns, face shields, double sterile gloves, head and shoe caps). The procedure was performed under general anaesthesia by an experienced anaesthetist. Results The study had initially enrolled 16 patients with severe COVID-19 pneumonia that was complicated by ARDS and who had undergone a bronchoscopy. However, 2 patients were excluded because 1 had a loculated effusion and the other had nosocomial fungal pneumonia. The remaining patients consisted of 9 males and 5 females (Table 1). More than 70% (n=10/14) of the patients were obese and 21% (n=3/14) were overweight (Table 1). Of the males, 11% (n=1/9) had class 3 obesity, 33% (n=3/9) had class 2 obesity and 22% (n=2/9) had class 1 obesity (Table 2). Of the females, 40% (n=2/5) had class 3 obesity, 20% (n=1/5) had class 1 obesity and 20% (n=1/5) were overweight ( Table 1). The remaining female was postpartum at 42 years of age. She had delivered a live infant weighing 1.25 kg at 29 weeks of gestation by caesarean section and although the initial APGAR score was low, the condition of the infant subsequently improved. The delivery was performed prior to, but not because of, the bronchoscopy. More than a quarter of the patients (n=4/14) had a combination of diabetes, hypertension and hyperlipidaemia, 1 had epilepsy, 2 had hypertension and diabetes, 3 had diabetes alone and 1 had hypertension alone. Half of the patients (n=7/14) were older than 60 years and 28% (n=4/14) had no known comorbidities (Fig. 1). The CT scan of the chest confirmed pneumonic changes consistent with a severe COVID-19 pneumonia in all the patients. Despite no evidence of mucus plugging or atelectasis on the chest radiograph, significant mucus impaction was found during the FFB. The X-rays of patients 1 and 3 pre-and postbronchoscopy are shown in Figs 4 and 5. Patients 2, 5 and 9 underwent bronchoscopy immediately after intubation and no evidence of mucus plug formation was observed but repeat bronchoscopy was performed after ~ 1.75 days. All the patients improved their PaO 2 and oxygen saturation and patients 8 -14 improved both PaO 2 and OI after bronchoscopy ( Table 2). Patients 2 and 4 showed the presence of thick gelatinous mucus within the 2nd and 3rd generation bronchi and removal of the mucus was associated with improvement in hypoxaemia despite no alteration of the mechanical ventilator settings. Patients 1, 3, 10 and 11 underwent emergency FFB after an average of 7.5 days on mechanical ventilation after having desaturated significantly with an FiO 2 of 1.0 without alteration of ventilatory parameters and no X-ray changes that could explain this deterioration (Table 2). Thick mucus plugs causing partial obstruction of both the main and smaller bronchi were visualised. A significant improvement in the PaO 2 occurred in these patients after the removal of the mucus plugs ( Table 2). Half of the patients (n=7/14) underwent bronchoscopy after day 7 of ventilation, which also showed the presence of gelatinous mucus and partial blockage of the endotracheal tube. 
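Since the text quotes oxygenation index (OI) severity bands but does not give the underlying formula, the sketch below assumes the conventional definition (FiO2 expressed as a percentage, multiplied by mean airway pressure and divided by PaO2) together with the bands used in the Methods (>40 severe, 25-40 moderate, otherwise mild). The pre- and post-bronchoscopy values are hypothetical and are not taken from the study patients.

```python
def oxygenation_index(fio2_fraction, mean_airway_pressure_cmh2o, pao2_mmhg):
    """Conventional OI: FiO2 (as a percentage) x mean airway pressure / PaO2."""
    return (fio2_fraction * 100.0 * mean_airway_pressure_cmh2o) / pao2_mmhg


def oi_band(oi):
    # Severity bands as used in the text: >40 severe, 25-40 moderate, otherwise mild.
    if oi > 40:
        return "severe"
    if oi >= 25:
        return "moderate"
    return "mild"


# Hypothetical pre- and post-bronchoscopy values, for illustration only.
pre = oxygenation_index(1.0, 28, 55)   # FiO2 1.0, MAP 28 cmH2O, PaO2 55 mmHg
post = oxygenation_index(0.6, 24, 90)
print(f"OI pre {pre:.1f} ({oi_band(pre)}), post {post:.1f} ({oi_band(post)})")
```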
The diameter of the working channel of the bronchoscope used was 2 mm, but in view of the tenacity of the mucus, biopsy forceps had to be used to facilitate extraction. Patients 1, 11 and 3 desaturated multiple times during the procedure, requiring manual bagging with a bag-valve device and even reintubation, as there was difficulty extracting the mucus within the endotracheal tube.
Fig. 3. Computed tomography (A) scan for patient 2 and fibrinous plugs in subsegmental bronchi.
The total time for the procedure for patient 1 was 3 hours, with the lowest oxygen saturation recorded at 23% for 30 seconds. For patient 3, the procedure time was 2.5 hours, with the lowest oxygen saturation recorded at 40% for 40 seconds, and the procedure time for patient 11 was 55 minutes, with the lowest oxygen saturation recorded at 36% for 24 seconds. Patients 3, 5, 7 and 11 were extubated 72 hours after FFB to high-flow nasal cannula, and patients 1, 4, 6 and 9 are currently on minimal ventilator settings. The remainder of the patients were still on mechanical ventilation with FiO2 >0.5 at the time of writing this report.
Discussion We have demonstrated that some patients with severe COVID-19 pneumonia and persistent hypoxaemia despite resolution of inflammatory parameters may respond to FFB following removal of mucus plugs. Although patients have been classified into H- and L-types, it does appear that those who require prolonged ventilation present and behave in a similar manner to patients with classical ARDS. [10] Some patients fail to improve their oxygenation despite optimal mechanical ventilation and pharmacotherapy inclusive of corticosteroids. These patients have a prolonged ventilatory course often complicated by secondary hospital-acquired sepsis with an associated high mortality. [7] Most international thoracic societies do not recommend therapeutic bronchoscopy except for control of pulmonary haemorrhage or for selected patients with lung atelectasis. However, our study demonstrated that radiological changes may be insensitive for the detection of significant mucus plugging, and atelectasis may be missed. It is likely that at least some of the ground-glass alveolar infiltrates observed in COVID-19 patients may represent filling of the alveolar spaces by mucus, with or without some degree of segmental atelectasis, and may also be a factor involved even in those with comorbidities predicting a worse outcome. A study by Torrego et al. [11] confirmed the presence of mucus in the airways during bronchoscopy in 95% of 101 COVID-19 patients with an average ventilation duration of 6.6 days. Importantly, Earhart et al. [12] demonstrated that the use of the mucolytic dornase alfa in patients with COVID-19 improved outcomes and shortened the duration of ventilation. A more recent randomised clinical trial of the oral mucolytic bromhexine in COVID-19 patients showed that the benefit of bromhexine is maximised if started early, and that it can reduce respiratory symptoms, the need for ICU admission, intubation and mechanical ventilation, and mortality. [13] In our opinion, therapeutic FFB should be considered as an adjunctive therapy for COVID-19 patients with refractory hypoxaemia, or even as routine therapy around day 7 of mechanical ventilation if patients are slow to improve.
It is critical that oxygen delivery is maintained if hypoxaemia occurs during the procedure, as patients appear to be protected from the effects of hypoxaemia so long as cardiac output and haemoglobin are maintained at the time of desaturation. [14,15] Therapeutic FFB to remove mucus plugs may be lifesaving and may reduce ventilator days and even mortality. We suggest that the routine use of mucolytics, and thereafter bronchoscopy, should be considered as rescue therapy before embarking on the use of ECMO. FFB is cheaper, less invasive and less complicated than ECMO. Airway obstruction by mucus plugs should be considered as an alternative explanation to the H-type phenotype or lung fibrosis in some patients, and perhaps an additional pathophysiological phenotype should be included, the mucus type (M-type).
Procoagulatory changes induced by head-up tilt test in patients with syncope: observational study Background Orthostatic hypercoagulability is proposed as a mechanism promoting cardiovascular and thromboembolic events after awakening and during prolonged orthostasis. We evaluated early changes in coagulation biomarkers induced by tilt testing among patients investigated for suspected syncope, aiming to test the hypothesis that orthostatic challenge evokes procoagulatory changes to a different degree according to diagnosis. Methods One-hundred-and-seventy-eight consecutive patients (age, 51 ± 21 years; 46% men) were analysed. Blood samples were collected during supine rest and after 3 min of 70° head-up tilt test (HUT) for determination of fibrinogen, von Willebrand factor antigen (VWF:Ag) and activity (VWF:GP1bA), factor VIII (FVIII:C), lupus anticoagulant (LA1), functional APC-resistance, and activated prothrombin time (APTT) with and without activated protein C (C+/−). Analyses were stratified according to age, sex and diagnosis. Results After 3 min in the upright position, VWF:Ag (1.28 ± 0.55 vs. 1.22 ± 0.54; p < 0.001) and fibrinogen (2.84 ± 0.60 vs. 2.75 ± 0.60, p < 0.001) increased, whereas APTT/C+/− (75.1 ± 18.8 vs. 84.3 ± 19.6 s; p < 0.001, and 30.8 ± 3.7 vs. 32.1 ± 3.8 s; p < 0.001, respectively) and APC-resistance (2.42 ± 0.43 vs. 2.60 ± 0.41, p < 0.001) decreased compared with supine values. Significant changes in fibrinogen were restricted to women (p < 0.001) who also had lower LA1 during HUT (p = 0.007), indicating increased coagulability. Diagnosis vasovagal syncope was associated with less increase in VWF:Ag during HUT compared to other diagnoses (0.01 ± 0.16 vs. 0.09 ± 0.17; p = 0.004). Conclusions Procoagulatory changes in haemostatic plasma components are observed early during orthostasis in patients with history of syncope, irrespective of syncope aetiology. These findings may contribute to the understanding of orthostatic hypercoagulability and chronobiology of cardiovascular disease. Background It has long been observed that major cardiovascular and thromboembolic events show higher incidence after awakening, which has been partly explained by increased platelet activity during morning hours [1][2][3][4]. In parallel, a phenomenon of "orthostatic hypercoagulability", i.e. increase in plasma procoagulants independent of postural volume shift and extravascular fluid escape, has been reported [5][6][7]. The prothrombotic changes during passive orthostasis were first detected in healthy volunteers without history of syncope, cardiovascular or thromboembolic disease. Similar changes in coagulation factors were induced by lower-body negative pressure in volunteers who developed vasovagal reflex but, interestingly, those who did not develop the reflex showed no significant alterations in coagulation factors [8]. Changes in the coagulation system have also been detected during vasovagal reflex activation: a study performed in subjects with von Willebrand disease has demonstrated increased antigen concentration of von Willebrand factor (VWF:Ag), VWF-Ristocetin-cofactor, and factor VIII activity (FVIII:C) after fainting prompted by fear of venepuncture [9]. In a previous study, we have observed that von Willebrand factor is increased in patients with orthostatic hypotension compared with other patients investigated for suspected syncope, irrespective of body position [10]. 
However, we have also noticed that both groups demonstrated procoagulatory changes during passive orthostasis evoked by head-up tilt test (HUT). In this study, we took the opportunity to analyse plasma samples collected at rest and during the early phase of HUT to assess how passive orthostasis impacts coagulation factors. We aimed to test the hypothesis that passive orthostasis during HUT would evoke procoagulatory changes and that these changes would differ according to HUT diagnosis. Patients and inclusion/exclusion criteria The present study was performed from November 2011 to October 2012 as a part of the ongoing Syncope Study of Unselected Population in Malmo (SYSTEMA) project, a single-centre observational study on syncope aetiology [11]. During this period, 233 consecutive patients underwent head-up tilt test (HUT) due to unexplained syncope and/or orthostatic intolerance. Patients were included if they accepted blood sampling during HUT (n = 183), and excluded if they were on current oral anti-coagulation therapy with warfarin (n = 5), leaving 178 patients that were eligible for the final inclusion. None of the included patients had a diagnosed bleeding disorder. All patients provided written informed consent and were included in further analyses. The Regional Ethical Review Board of Lund University approved the study protocol including blood sampling during tilt testing. Head-up tilt test Study participants were requested to take their regular medication and fast for 2 h before the test, although they were allowed to drink water. The examination was based on a specially designed HUT protocol, which included peripheral vein cannulation, supine rest for 15 min and HUT with optional nitroglycerine provocation after 20 min of passive HUT according to the Italian protocol [12]. Beat-to-beat blood pressure (BP) and ECG were continuously recorded using a Nexfin monitor (BMEYE, Amsterdam, The Netherlands) [13]. The detailed description of the examination protocol has been previously published [14]. The final HUT diagnoses were concordant with the current European Society of Cardiology guidelines [15]. Blood sampling Blood was collected during supine rest before and after 3 min of 70°HUT (during passive phase, prior to optional nitroglycerine administration) in citrated tubes (BD Vacu-tainer®, 4,5 mL, 0,109 M sodium citrate). Samples were immediately centrifuged for 20 min at 2000 x g. Plasma was separated and immediately frozen at −70°C. Coagulation analyses APTT was performed at the accredited hospital laboratory using a BCS-XP analyzer (Siemens Healthcare, Marburg, Germany) with the Actin FSL reagents (Siemens Healthcare, Marburg, Germany), with and without addition of activated protein C (reference interval in healthy individuals for test without activated protein C, 26-33 s). The analysis of plasma fibrinogen concentration was performed using the Dade Thrombin reagent on a Sysmex CS-5100 analyser (Siemens). The reference interval has been established locally as 2-4 g/L. The coagulation analyses below were performed on a BCS-XP instrument (Siemens). The original plasma-based activated protein C resistance (APC-resistance) test was performed with COATEST APC Resistance test (Chromogenix, Mölndal, Sweden). Reference interval was >2.1. The activity of von Willebrand factor (VWF:GP1bA) was analysed with an immunoassay based on an antibody against glycoprotein 1b (GP1b) and a GP1b construct that binds VWF in the sample (Innovance VWF:Ac; Siemens). 
Reference interval established locally was 0,5-2.0 kU/L. Antigen concentration of von Willebrand factor (VWF:Ag) was measured with a latexenhanced immunoassays (Siemens). Reference interval was 0.6-2.7 kIU/L. Screening test for lupus anticoagulant (LA1) was performed with a clot-based assay: LA1 (Siemens). Reference interval was <42 s. Factor VIII activity assay (FVIII:C) was performed with a chromogenic substrate method (Coatest FVIII SP, Chromogenic). Reference interval was 0.5-2.0 kIU/L. Imprecision measured as a coefficient of variation was 2% for VWF:Ag (at 1.2 kIU/L), 3% for VWF:GP1bA (at 0.9 kIU/L), 4% for LA1: (at 40 s), 3% for FVIII: (at 0.8 and 1.5 kIU/L), and 2-4% for fibrinogen (at 1 g/L, 2%; at 3 g/L, 4%). Statistical analyses Independent samples Student's t-test was used to compare continuous variables between men and women whereas paired samples Student's t-test was used to compare changes in coagulation parameters measured in supine position and during HUT, respectively. Analyses regarding coagulation parameters were stratified according to sex and age (younger vs. older adults; 65 years). Changes in coagulation parameters during HUT according to diagnosis were specifically tested by comparing the changes in subjects with VVS versus all other diagnoses (including negative HUT) and comparing the changes in subjects with OH (classical + delayed form) versus all other diagnoses, using independent samples Student's t-test. Categorical variables were compared using Pearson's chi-square test. All analyses were performed using IBM SPSS Statistics version 23 (SPSS Inc., Chicago, IL, USA). All tests were two-sided whereby p < 0.05 was considered statistically significant. Results Patient characteristics stratified by gender are shown in Table 1. Men were older and had lower resting heart rate. Sixty-nine (38.8%) patients were diagnosed with vasovagal reflex syncope (VVS), whereas 49 (27.5%) patients met OH criteria. Further, 7 patients had carotid sinus syndrome, 7 postural orthostatic tachycardia syndrome, 9 initial OH, 5 psychogenic pseudosyncope and 17 (9.6%) underwent HUT without a definitive diagnosis (i.e. negative test). Table 2 shows data on measurements of coagulation markers in supine and after 3 min of HUT. All patients had a significantly increased level of VWF:Ag and fibrinogen after 3 min of HUT, whereas APC-resistance ratio and APTT were significantly decreased compared with resting supine values. There were no significant differences between supine and HUT regarding FVIII, VWF:GP1bA and LA1. All parameters were within the reference interval for healthy individuals. In gender-stratified analyses, only women had significantly increased fibrinogen and demonstrated a shorter LA1 time during HUT (see: Table 3). Stratification of study population by the age of 65 years showed similar distribution of changes in coagulation parameters (Table 4). A final diagnosis of VVS was associated with less increase in VWF:Ag during orthostasis (0.01 ± 0.16) compared to all other diagnoses (0.09 ± 0.17; p = 0.004). The changes in coagulation parameters were not different in patients diagnosed with OH compared with all other diagnoses (p > 0.05). Discussion The present study illustrates the impact of passive orthostatic challenge on haemostatic plasma components in a series of unselected patients with a history of unexplained syncope. During early phase of head-up tilt test, fibrinogen, APTT, VWF:Ag, and APC-resistance demonstrated significant procoagulatory changes. 
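The supine-versus-HUT contrasts reported above rest on paired samples Student's t-tests, as described under Statistical analyses. A minimal sketch of such a comparison is given below, using scipy in place of SPSS and invented VWF:Ag values for eight hypothetical patients.

```python
import numpy as np
from scipy import stats

# Hypothetical paired VWF:Ag values (kIU/L) for eight patients: supine rest vs. 3 min of 70° HUT.
vwf_supine = np.array([1.10, 0.95, 1.40, 1.22, 1.05, 1.31, 0.88, 1.17])
vwf_hut = np.array([1.18, 0.99, 1.47, 1.25, 1.12, 1.36, 0.93, 1.21])

t_stat, p_value = stats.ttest_rel(vwf_hut, vwf_supine)  # paired samples Student's t-test
mean_change = float(np.mean(vwf_hut - vwf_supine))
print(f"mean change = {mean_change:.3f} kIU/L, t = {t_stat:.2f}, p = {p_value:.4f}")
```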
Although relative changes in the assessed coagulation markers were small, indeed, within the test imprecision range for fibrinogen, the directionality of these changes consequently pointed at the increased coagulability on standing. Fibrinogen, as acute-phase glycoprotein, may be elevated in any form of inflammation and is also involved in formation of blood clots not only through fibrinformation, but also through its effect on specific platelet receptors and blood viscosity. Higher levels of fibrinogen are associated with cardiovascular disease [16][17][18][19] and its degradation products accumulate in the atherosclerotic plaque [20]. Interestingly, fibrinogen increased most in women who also demonstrated procoagulatory LA1 changes. The exact mechanisms behind these genderspecific propensities are not known and were not the subject of this study, however, hormonal differences may be one of the possible explanations [21]. Thus, orthostatic procoagulatory changes in fibrinogen, LA1, and FVIII seem mainly to affect women who are at higher risk of thromboembolic events compared with men [22]. As thromboembolic events are more common in the morning hours [3], orthostatic increase in fibrinogen and FVIII may act as a facilitator of thrombus generation after awakening and assuming an upright position, in a relative dehydrated state after a night's rest. Notably, APTT was reduced on orthostasis irrespective of the analysed subgroup. Shortening of APTT usually indicates increased activity of FVIII in association with inflammatory conditions [23]. As APTT involves both fibrinogen and FVIII, which first and foremost increased in women, it seems that other factors such as prothrombin, V, IX, X, XI, and XII may have undergone orthostatic changes too. Unfortunately, our test panel did not detect this. Shortening of APTT, although within the normal range, may have thromboembolic consequences, as indicated by a previous study [24]. We believe that shortened APTT indicates a hypercoagulable state, as was also shown in a study by Mina et al. [25]. This is supported by other clinical studies performed on patients with acute coronary events [26,27]. Furthermore, we observed elevated levels of VWF during orthostasis. It has been reported that higher levels of VWF are associated with a 3-fold increased risk for severe coronary heart disease [28][29][30]. Our results are consistent with other studies, linking changes in VWF and FVIII to the vasovagal reflex, artificially induced orthostatic stress and presyncope [5,6,8]. The von Willebrand factor is the largest plasma protein and plays a key role in primary haemostasis as cofactor in platelet adhesion and platelet aggregation. This platelet adhesion is promoted by a platelet membrane receptor glycoprotein (GPIb -IX-V) when circulating VWF attaches to the sub-endothelium collagen and serves as a bridge between tissue and platelets [31]. Previous studies performed on healthy subjects have shown that platelet aggregability is increased in the morning, when the incidence of cardiovascular events such as myocardial infarct, sudden cardiac death and transient myocardial ischemia is higher [1,[32][33][34][35]. However, data on the changes in haemostatic plasma components in the same settings are very sparse and we believe that this study may contribute to the understanding of cardiovascular chronobiology. 
Our results confirm that changing from supine to upright body position induces activation of the coagulation system and promotes a hypercoagulable state during early orthostasis. The physiological role of orthostatic hypercoagulability may be to protect the body against increased bleeding risk during activities of daily life or be a result of higher hydrostatic pressure in the lower body, which will activate the endothelial response and release of coagulation factors. The activation of haemostatic plasma components may support the hypothesis of tight cooperation between procoagulatory factors and platelets, and its impact on the development of cardiovascular events. Whether or not cardiovascular events such as myocardial infarct, sudden cardiac death, transient myocardial ischemia, and stroke during morning hours are promoted by synergic effects of both orthostatic hypercoagulability and increased platelet aggregability remains to be further explored. Finally, patients diagnosed with vasovagal reflex syncope (VVS) demonstrated less pronounced changes in vWF:Ag than the rest of study population, whereas those with OH did not differ from non-OH patients. The former may be due to relatively younger age of VVS patients, although this study does not allow further speculations on the cause of this difference, whereas the latter implies that procoagulatory orthostatic changes are independent of BP fluctuations on standing. Limitations The conclusions of our study may be underpowered due to some important limitations (listed below). Thus, we would like to emphasize that the current study may be useful as a "proof of concept" for a larger and improved study design, in which these limitations could be overcome. No control group including healthy individuals was included and there were no measurements of coagulation parameters in subjects that did not undergo HUT. As the participants of the current study are part of the ongoing larger SYSTEMA project [11], no prospective sample size assessment was done, prior to the study. We were not able to measure specific markers of activated coagulation, such as thrombin-antithrombin-complex (TAT) or D-dimer, which would have been informative. Furthermore, since data on hereditary thrombophilia and levels or function of platelets were not registered, it was not feasible to separate possible contributions of these variables to hypercoagulability. Procoagulants were measured after 3 min of HUT only and we do not have data on how the prolonged standing might influence the assessed coagulation factors. However, determination of early orthostatic changes may have precluded the possible effects of intravascular volume escape observed during prolonged standing on the concentration of procoagulatory proteins. Also of relevance, even though the observed rise in coagulation markers is in direction during HUT, we cannot exclude a similar reaction if subjects were to be provoked by physical stress that did not involve orthostasis (such as on a supine ergometer bicycle). Finally, our study design and the fact that we did not measure potentially important coagulation proteases such as thrombin, factor Xa and tissue factor involved in atherosclerosis development, precludes us from drawing any conclusions on whether syncope induced hypercoagulability might also contribute to the development of atherosclerosis. Conclusions Procoagulatory changes in haemostatic plasma components can be observed during early phase of passive orthostatic challenge irrespective of syncope diagnosis. 
These findings may contribute to the understanding of orthostatic hypercoagulability and chronobiology of cardiovascular disease.
Organic material combined with beneficial bacteria improves soil fertility and corn seedling growth in coastal saline soils ABSTRACT Soil salinity is a major abiotic stress on plant growth in coastal saline soil. The objective of this study was to screen the optimal combination of organic materials with beneficial bacteria for application under real field conditions to improve coastal saline soil. A two-factor pot experiment was carried out with corn in coastal saline soil for 26 days. In the naturally aerobic environment, a split-plot experiment was conducted with different rates of organic materials (organic fertilizer and mushroom residue) [...] INTRODUCTION The global soil area affected by salinity exceeds 8 × 10 8 hectares (FAO, 2000). China has 1.3 × 10 7 hectares of coastal land with saline-alkali soil (Qin et al., 2002). The range of soil salinization has been increasing every year, due to the unreasonable use of saline soils, while the arable land area is decreasing (Gupta and Huang, 2014). Furthermore, business, industry, and housing are continuously occupying more agricultural land, resulting in a further reduction of available farmland. Coastal saline soil is an important reserve of virgin farmland. However, its poor soil structure may cause soil deterioration and reduce crop growth and yield due to macronutrient deficiency, along with specific ion damage and nutrient disorder in the plant roots caused by NaCl (Farooq et al., 2015;Zhou et al., 2018). The world population is expected to reach 8.9 billion by 2050, with increases particularly in developing countries (Cristol, 2003). The population increase will also lead to a rising demand for grain production (Godfray et al., 2010). Therefore, it is urgent to improve coastal saline soil, increase soil fertility and crop yields, and recover areas degraded by misuse. The application of inorganic fertilizer may increase crop yields for short periods, but the long-term accumulation of mineral salts will aggravate salinization in coastal saline soil, reduce its quality and pollute the environment (Wu, 2011;Meena et al., 2019). In recent years, more attention has been paid to soil-plant-microorganism interactions. There are many kinds of microorganisms in the soil, which play an important role in soil nutrient cycling and promoting plant growth. The amount of inorganic fertilizer should be reduced and partly replaced by organic and microbial fertilizers to improve the coastal saline soil without affecting the environment (Adesemoye and Kloepper, 2009;Kumar et al., 2009). Moreover, this measure can stabilize the resistance and resilience of the soil microbial system and contribute to the maintenance of the diversity and stability of the soil ecosystem. Organic materials can improve soil nutrients and promote crop growth in saline soils. Recent studies pointed out that the use of different beneficial bacteria induced a beneficial effect on plant growth under salt stress (Zhang et al., 2016;Liu et al., 2019). Therefore, we hypothesized that organic material combined with beneficial bacteria could improve parameters related to corn seedling growth under saline conditions. Coastal soils have low contents of available phosphorus, which is mainly due to the high content of calcium ions, which fixate and precipitate most of the available phosphorus (Kaur and Reddy, 2015). 
Potassium is abundant in coastal saline soils, but about 90~98 % of it is found in silicate minerals, such as potassium feldspar and mica (Goldstein, 1994), i.e., most of the potassium is not readily available and cannot be absorbed and utilized by plants (Zorb et al., 2014). Phosphorus-and potassium-solubilizing bacteria (PSB and KSB, respectively) can improve the phosphorus activity and available K content in the soil rhizosphere by increasing the dissolution and release of these nutrients . Organic materials combined with beneficial bacterial not only promote the production of vitamins, hormones, and enzymes and stimulate plant growth, but also improve the diversity of soil microorganisms and enhance the stability of the soil ecosystem (Singh et al., 2011). In addition, organic material itself contains a large amount of nutrients, and application to coastal saline soil with poor soil fertility can effectively increase the available nutrients in the soil (Wu et al., 2018). Thus, our second hypothesis is that organic materials combined with beneficial bacteria can improve the nutrient availability in coastal saline soil, especially of phosphorus and potassium. We tested our hypotheses by growing corn plants in collected soil and adding organic matter and beneficial bacteria. We also measured the soil pH and total salt content to better understand the effects of organic materials and beneficial bacteria on corn growth and nutrient availability. At the same time, the optimal dosage of organic matter and beneficial bacteria under salt stress was determined by comprehensive evaluation of improvement, to provide a basis for the development of sustainable agricultural practices for coastal saline soil. The experimental organic material was a combination of organic fertilizer (produced by Shandong huade chemical technology Co. Ltd.) and mushroom residue (produced by Taian Academy of Agricultural Sciences). The respective components of the organic fertilizer and mushroom residue were as follows: organic carbon contents 325.24 and 471.82 g kg -1 ; total N contents 40.52 and 9.98 g kg -1 ; total P contents 58.03 and 10.35 g kg -1 ; total K contents 2.33 and 22.95 g kg -1 ; and moisture contents 19.03 and 60.00 %. According to the optimum C/N ratio for microbial growth, the organic materials consisted of organic fertilizer and mushroom residue and had a C/N ratio of 23:1. The experimental beneficial bacteria consisted of phosphorus-and potassium-solubilizing bacteria (PSB and KSB, respectively), produced by Guangzhou microelements and biotechnology Co. Ltd. The numbers of viable bacteria were ≥2 × 10 10 cfu g -1 and ≥2 × 10 10 cfu g -1 , respectively. The composite bacteria included Bacillus sp. and Brevibacillus sp. Luria-Bertani (LB) medium (solid and liquid media) was used in the preparation process of beneficial bacteria fluid. About 1 g bacteria powder of PSB and KSB was suspended in 99 mL sterile distilled water. The suspensions were shaken, subjected to serial dilutions (10 -1~1 0 -9 ), and 0.1 mL of 10 -7~1 0 -9 dilutions was spread on solid medium in triplicate and incubated at 28~30 ℃ for 72 h. Colonies of target bacteria were inoculated on liquid medium, shaken and cultivated at 37 ℃ for 18 h. At this time, the bacteria had reached the logarithmic growth phase. Then the bacterial solution was diluted and spread on plates with solid medium. The counted number of colonies per plate was valid if it was between 30 and 300 cfu plate -1 . 
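The viable-count arithmetic implied by this plating procedure can be sketched as follows. The dilution, plating volume and colony counts below are hypothetical, the 30-300 colony validity window is taken from the text, and the calculation (mean countable colonies divided by the product of dilution and volume plated) is the standard plate-count formula rather than anything stated explicitly by the authors.

```python
def cfu_per_ml(colony_counts, dilution, volume_plated_ml=0.1):
    """Estimate a viable count from replicate plates at a single dilution.

    Only plates with 30-300 colonies are treated as countable, as in the text.
    """
    valid = [c for c in colony_counts if 30 <= c <= 300]
    if not valid:
        raise ValueError("no countable plates (30-300 colonies) at this dilution")
    mean_count = sum(valid) / len(valid)
    return mean_count / (dilution * volume_plated_ml)


# Hypothetical triplicate counts; 0.1 mL of a 10^-5 dilution plated.
# This roughly reproduces the order of magnitude of the reported KSB titre (74 x 10^6 cfu/mL).
print(f"{cfu_per_ml([68, 74, 81], dilution=1e-5):.2e} cfu/mL")
```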
Colonies were spread on liquid medium, shaken, and cultivated at 37 ℃ for 18 h. The KSB solution concentration was 74 × 10 6 cfu mL -1 and that of the PSB solution 97 × 10 6 cfu mL -1 . The two types of beneficial bacterial solutions were preserved for later use. Experimental design The split-plot design consisted of 10 treatments (including the control) with three replications. The pot experiment lasting for 26 days was carried out at the experimental station of Shandong Agricultural University. The experimental site has a warm temperate continental monsoon climate, the average precipitation of this region is 43.6 mm, the average high and low temperature in May is 26 and 15 ℃, respectively. Our pot experiment was conducted under field-like conditions. Coastal saline soil of the Wudi experimental station was air-dried and sieved (<2 mm) for the pot experiment. One kilogram of natural un-sterilized soil was thoroughly mixed with organic materials (C/N ratio of 23:1) at rates of 2 % (F2), 4 % (F4), and 6 % (F6) of dry soil weight. The soil was packed into clay pots (φ 16 cm, height 18 cm), which were buried two-thirds in the ground. Rev Bras Cienc Solo 2020;44:e0190179 On May 1, 2016, six corn (variety: Zhongzhong 8) seeds were sown at a depth of about 3 cm and thinned to three plants per pot after emergence. The mixed beneficial bacteria solution was applied close to the roots of each plant. The application rates of the different treatments were 1 × 10 8 (B1), 2 × 10 8 (B2), and 3 × 10 8 (B3) cfu plant -1 , respectively. Treatments B1 and B2 were supplemented with distilled water. For the control experiments, bacteria cells were killed by autoclaving, or sterile phosphate buffer was used. During the experiment, the pots were watered weekly with distilled water to field capacity (70 %). Other agronomic practices were applied when necessary during the growth process. The experimental treatments are shown in table 1. Plant and soil measurements and analysis The corn seedling height and stem diameter were determined with a measuring tape and Vernier caliper. On May 26, 2016, SPAD values per plant were read six times in the field, and the average value was determined with a Minolta SPAD-502 chlorophyll meter. Each treatment was repeated three times. At 26 days after planting, the whole plant of each pot was harvested. Both shoots and roots were washed with distilled water and stored in brown paper bags. The corn seedlings were oven-dried at 105 ℃ for 30 min and then at 65 ℃ to constant weight of the dry biomass. At the same time, soil samples of about 100 g were collected, air-dried, and analyzed. The soil pH(H 2 O) was determined at a 1:2.5 soil/liquid ratio, according to Jackson (1973); soil total salt content by the dry residue weight method (1:5 w/v water) according to Bao (2000); soil total nitrogen content by the Kjeldahl method; soil alkali-hydrolyzed nitrogen by alkali-hydrolysis diffusion as described by Bao (2000); soil available phosphorus by molybdenum-antimony colorimetry (Murphy and Riley, 1962); and soil available potassium by flame photometry (Bao, 2000). Statistical analyses One-way analysis of variance (ANOVA) was used to analyze each data set, and means were separated by Tukey's post hoc test (p<0.05). Correlation analysis was used to analyze the dependence of soil chemical properties and cluster analysis to classify the different treatments. All statistical analyses were performed using software SPSS 19.0. 
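For readers who wish to reproduce the treatment comparisons outside SPSS, a minimal sketch of the one-way ANOVA with Tukey's post hoc test described above is shown here; the treatment labels follow the paper's naming, but the biomass values are invented for illustration.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical dry-biomass values (g per pot), three replicates for three of the ten treatments.
biomass = {
    "CK": [1.8, 2.0, 1.9],
    "F2B1": [2.6, 2.8, 2.7],
    "F6B3": [4.1, 4.4, 4.3],
}

f_stat, p_value = f_oneway(*biomass.values())  # one-way ANOVA across treatments
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

values = np.concatenate(list(biomass.values()))
labels = np.repeat(list(biomass.keys()), [len(v) for v in biomass.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))  # Tukey's post hoc comparisons
```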
To avoid multicollinearity among indexes, principal component analysis (PCA) was used to construct a comprehensive evaluation function. Ultimately, the higher the score of the comprehensive evaluation, the better the soil fertility and seedling growth. The steps of the comprehensive evaluation are as follows: PCA is applied to the original 10 indexes to obtain the two principal components with the largest contribution rates; taking the variance contribution rate of each principal component as its weight (λi), the comprehensive score is then calculated as the λi-weighted sum of the principal component scores (Fi) (equation 2).
Soil chemical properties Total N, alkali-hydrolyzed N, available P, and available K increased significantly in the treatments with organic materials and beneficial bacteria when compared with the same soil properties of the control (p<0.001; Table 2). There were significant positive correlations between total N, alkali-hydrolyzed N, available P, and available K (p<0.001; Table 2). The addition of organic materials and beneficial bacteria increased soil total N content, which was significantly different from the control (Table 2). The soil total N content increased significantly in treatment F6B3 (4.78~47.43 %), compared with the other treatments. Except for F6B3, there were no significant differences in soil total N among all other FB treatments. Soil alkali-hydrolyzed N contents of treatments F6 were significantly higher than those of the control and increased significantly in F6B3 (25.42 %), compared with the control. No significant differences in soil alkali-hydrolyzed N were observed among all FB treatments except F6B2. The combined PSB and KSB application could effectively enhance the soil available P and K contents (Table 2). Soil available P contents of the different beneficial bacteria treatments with the same amount of organic materials (F4 and F6) increased significantly with the higher bacteria dosages. Treatment F6B3 increased the soil available P contents the most. The increase in organic materials and beneficial bacteria decreased the soil pH(H2O) (Table 2). Although the difference between F6B3 and the control was 0.16 units, the differences in soil pH were not significant among the treatments (p = 0.175). Soil total salt content of all treatments decreased significantly with the increase in organic materials and beneficial bacteria dosage. Soil total salt contents of F4B3, F6B2, and F6B3 were significantly (11.67, 10.00, and 12.50 %) lower than in the control treatment. Compared to F2B1, F6B3 significantly reduced the total salt content by 12.72 %. Correlation analysis showed that soil pH(H2O) was significantly negatively correlated with soil available P and K (p<0.05), and total salt content was highly significantly negatively correlated with total N, alkaline N, available P and K (p<0.01).
Corn seedling growth Dry biomass, stem diameter, seedling heights, and SPAD readings of corn seedlings of all treatments increased significantly with the increase in the amount of organic materials and beneficial bacteria (Figure 1), and any two parameters were significantly positively correlated in all treatments (p<0.05). Dry biomass of corn seedlings increased significantly with the increase in organic material rates (Figure 1a). Dry seedling biomass of F6B3 was not significantly different from that of F4B3 and increased by 28.21~156.90 % compared with the other FB treatments.
Also, F6B3 increased the seedling stem diameter by 25.94 % and 8.70~27.23 %, compared with the control and all other FB treatments, respectively. In addition, post-hoc multiple comparisons showed that there were no significant differences in corn stem diameter between the control and all FB treatments (except for treatment F6B3).
Comprehensive evaluation function The Euclidean distance was used as the inter-treatment distance and the inter-group connection as the clustering method. Soil total nitrogen, alkali-hydrolyzed nitrogen, available P and K, total salt, pH(H2O), plant height, stem diameter, dry biomass, and SPAD readings were used as identification indexes. Cluster analysis was applied to the 10 different treatments to establish the cluster tree graph (Figure 2). Results showed that the control was separated into a single class, while F6B3 and F4B3 were grouped in the same class. However, due to the lack of organic matter in coastal saline soil, applying additional amounts of organic material (F6B3) can contribute to a continuous improvement of coastal saline soil over time. Principal component analysis (PCA) was used to comprehensively evaluate the 10 determined indexes of soil properties and corn growth. Two principal components were obtained by PCA: the eigenvalue of the first principal component was 6.729, with a variance contribution rate of 49.865 %; the eigenvalue of the second principal component was 1.000, with a variance contribution rate of 27.428 %. Therefore, the cumulative contribution rate of the first and second principal components was 77.293 %, which accounted for most of the variation. The respective loadings of the two principal components on the original variables are listed in table 3. The first principal component carried the largest amount of information, including available P, available K, seedling height, dry biomass, total N, total salt, SPAD reading, and stem diameter (with absolute loading values of >0.75). The second principal component was dominated by soil pH(H2O). Therefore, the best indexes to evaluate the improvement effect in this study were the contents of available P and available K.
F = 0.336X1 + 0.329X2 + 0.302X3 + 0.262X4 + 0.315X5 - 0.241X6 + 0.307X7 + 0.221X8 + 0.293X9 - 0.081X10 (Eq. 6)
According to soil fertility and growth parameters of corn seedlings under different treatments, 10 comprehensive scores were obtained by using the comprehensive evaluation function. The score of F6B3 was the highest (Table 3), which indicated that coastal saline soil was best treated with organic materials at 6 % of dry soil weight and with 3 × 10^8 cfu plant^-1 of beneficial bacteria to increase soil nutrients and promote maize growth.
DISCUSSION Nutrient availability and soil salinity are the main factors affecting the microbiota (Jiang et al., 2019). Salt stress not only reduced the solubility and availability of soil nutrients (Imran et al., 2018) but also impaired nutrient uptake by the corn roots and affected plant growth (Gong et al., 2011). Our results showed that the soil nutrient contents and corn seedling growth were significantly higher in treatments with organic matter and beneficial bacteria than those of the control, consistent with previous studies (Shrivastava and Kumar, 2015; Liu et al., 2019). According to Wichern et al. (2006), organic materials reduced the negative effects of salt on the microbiota.
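A comprehensive score of the kind expressed in Eq. 6 can be illustrated with the short sketch below: the ten indexes are standardized, two principal components are extracted, and the component scores are weighted by their variance contribution rates. The data are random placeholders, the column names merely paraphrase the measured indexes, and normalising by the summed weights is an assumption, since the full form of equation 2 is not reproduced here; this is a sketch, not the authors' SPSS workflow.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical data: one row per treatment, ten measured indexes as columns (placeholder values).
rng = np.random.default_rng(0)
index_names = ["total_N", "alk_N", "avail_P", "avail_K", "total_salt",
               "pH", "height", "stem_diam", "biomass", "SPAD"]
X = pd.DataFrame(rng.normal(size=(10, 10)), columns=index_names,
                 index=[f"T{i}" for i in range(1, 11)])

Z = StandardScaler().fit_transform(X)      # standardize the ten indexes
pca = PCA(n_components=2).fit(Z)
scores = pca.transform(Z)                  # component scores F1 and F2 for each treatment
weights = pca.explained_variance_ratio_    # variance contribution rates (the lambda_i weights)

# Composite score: variance-weighted combination of the component scores
# (normalising by the summed weights is an assumption; equation 2 is not shown in full).
F = scores @ weights / weights.sum()
print(pd.Series(F, index=X.index).sort_values(ascending=False))
```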
A recent study showed that bio-organic fertilizers can potentially stimulate beneficial microbiota, changing the composition and abundance of microorganisms (Qiao et al., 2019). In addition, the C:N ratio tested in our experiment was more appropriate for beneficial bacteria activity (Yazdanpanah et al., 2013). Consistent with our first hypothesis, the dry biomass, height, and stem diameter of corn seedlings were significantly higher in F6B3 than in the other experimental treatments. In addition, the application of organic materials and beneficial bacteria can improve the SPAD reading, indicating that fertilization is beneficial to increase the relative chlorophyll content of corn. The higher relative chlorophyll content not only ensured a longer photosynthetic period of corn leaves, but also contributed to a higher net photosynthetic rate. Many researchers stated that growth and yield traits of corn could be significantly improved by the addition of PSB and KSB, compared with the control (Yadav et al., 2018;Feng et al., 2019). Beneficial bacteria can avoid ion toxicity by regulating various physiological processes of plants in saline soil (Ilangumaran and Smith, 2017), with improved water-salt balance, and ion balance under saline conditions by regulating the transpiration rate, carbohydrate transport, and K + transporter activity of corn (Marulanda et al., 2010;Rojas-Tapias et al., 2012). Beneficial bacteria can also cause the accumulation of cell secretions such as hormones and organic osmotic fluid to protect the plants against the effects of salt stress (Kang et al., 2014;Bhattacharyya et al., 2015). Adding appropriate beneficial bacteria to the soil can dissolve and release nutrients, which can meet the nutrient demand of corn seedlings, effectively alleviate soil salt stress and promote crop growth. The results of our experiment showed that organic materials and beneficial bacteria could decrease the salinity of coastal saline soil, especially treatment F6B3, and significantly reduced the soil total salt content by 12.50 % compared with the control. Previous studies have reported that the addition of organic materials can reduce the soluble Na + content, sodium solubility, and effectively decrease salinity (Wang et al., 2014). In addition, the increase of organic composition improved the stability of aggregates and physical properties of saline soils, as well as soil porosity and soil water conductivity (Rashid et al., 2016;Cong et al., 2017). In this experiment, the pH value did not decrease significantly with the increase in organic material and beneficial bacteria dosage. ANOVA showed that there was no significant difference in pH between different treatments. However, another study reported that beneficial bacteria produced a large amount of organic acid in the soil, which significantly reduced the soil pH (Ul Hassan et al., 2017). The results of our experiment may be attributable to the organic matter that weakened the effect of organic acid produced by beneficial bacteria so that the pH decrease is not significant in the coastal saline soil. Our second hypothesis that organic material combined with beneficial bacteria application would increase nutrient availability of coastal saline soil was confirmed in all fertilization treatments, especially F6B3. The content of total nitrogen and alkali-hydrolyzed nitrogen in the soil of F6B3 was increased by 47.43 and 25.42 %, respectively, compared with that of the control. 
In the study, the increase of soil nitrogen may be due to the effect of nitrogen fixation by Bacillus in PSB and KSB. According to Sahin et al. (2004), Bacillus has some nitrogen fixation effect, thus increasing soil nitrogen content. Similar results were also reported by Meena et al. (2015a) and Kumar et al. (2015). of organic acids and the production of indole acetic acid and siderophores (Tao et al., 2008;Richardson and Simpson, 2011), and enriched soil available phosphorus (Chang and Yang, 2009;Banerjee et al., 2010). Previous studies had shown that PSB application could increase the availability of phosphorus, reduce the amount of phosphorus fertilizer by 50 %, with no significant reduction in corn yield (Koliaei et al., 2011). At the same time, soil available P was strongly negatively correlated with soil pH (Chen et al., 2006). The soil available K content was significantly increased with the addition of organic materials and beneficial bacteria, F6B3 promoted an increase of 0.96~36.25 % compared with all other FB treatments and there was a significant negative correlation between soil available K content and soil pH. Studies have shown that KSB could significantly increase potassium solubilization in soil minerals and that soil available K accompanied the decrease in soil pH (Basak and Biswas, 2009;Meena et al., 2015b). This might be due to the organic acids secreted by KSB, which dissolved mineral K directly or chelated silicon ions resulting in the release of the K ions from soil minerals (Parmar and Sindhu, 2013). However, Maurya et al. (2014) found that the bacteria caused the lowest soil pH with the smallest amount of available K released from soil mica, which was contrary to our studies. Further studies are necessary to determine the mechanism of KSB dissolving soil mineral potassium. At the same time, a series of studies have shown that adding organic materials or PSB and KSB inoculation can not only significantly improve nutrient availability of corn, but also promote nutrient uptake of plant roots, especially of phosphorus and potassium (Nakayan et al., 2013;Kaur and Reddy, 2015). This study confirmed that the application of beneficial bacteria and organic materials is promising to improve the coastal saline soil, raise the soil nutrient availability and promote corn seedling growth, by the exploitation of soil-plant-microorganism interactions. The best combination of the two tested materials of this study to alleviate salt stress, improve soil nutrients, and promote corn growth in coastal saline soil consisted of organic materials applied at 6 % of the soil dry weight, together with 3 × 10 8 cfu plant -1 beneficial bacteria that decreased the salinity and enriched the soil with more available P and K than the other treatments. CONCLUSION The contents of soil nutrients, especially the key elements of the coastal saline soil fertility nitrogen, phosphorus and potassium, were increased significantly by applications of organic materials and beneficial bacteria under local salt stress. The soil total salt content decreased with the increase in organic material and beneficial bacteria, which alleviated sodium ion toxicity. Therefore, of the tested treatments, the application of 6 % organic materials with beneficial bacteria of 3 × 10 8 cfu plant -1 had the best effects on soil fertility and salt reduction of coastal saline soil. 
In addition, the improvement caused by organic materials and beneficial bacterial on coastal saline soil fertility and salt stress significantly promoted corn seedling growth. To sum up, the application of organic materials of 6 % of soil dry weight with beneficial bacteria of 3 × 10 8 cfu plant -1 could be further exploited as a very promising technical measure for grain production in coastal saline soil areas. Nonetheless, further field experiments are necessary to corroborate the findings of this pot study and to define the most effective practice for saline soil improvement and crop production in the future.
Risk factors for the development of tubo-ovarian abscesses in women with ovarian endometriosis: a retrospective matched case–control study Background The purpose of this study was to assess the risk factors associated with the development of tubo-ovarian abscesses in women with ovarian endometriosis cysts. Methods This retrospective single-center study included 176 women: 44 with tubo-ovarian abscesses associated with ovarian endometriosis and 132 age-matched (1:3) patients with ovarian endometriosis but without tubo-ovarian abscesses. Diagnoses were made via surgical exploration and pathological examination. The potential risk factors of tubo-ovarian abscesses associated with ovarian endometriosis were evaluated using univariate analysis. The results (p ≤ 0.05) of these parameters were analyzed using a multivariate model. Results Five factors were included in the multivariate conditional logistic regression model, including in vitro fertilization, presence of an intrauterine device, lower genital tract infection, spontaneous rupture of ovarian endometriosis cysts, and diabetes mellitus. The presence of a lower genital tract infection (odds ratio 5.462, 95% CI 1.772–16.839) and spontaneous rupture of ovarian endometriosis cysts (odds ratio 2.572, 95% CI 1.071–6.174) were found to be statistically significant risk factors for tubo-ovarian abscesses associated with ovarian endometriosis. Conclusions Among the factors investigated, genital tract infections and spontaneous rupture of ovarian endometriosis cysts were found to be involved in the occurrence of tubo-ovarian abscesses associated with ovarian endometriosis. Our findings indicate that tubo-ovarian abscesses associated with ovarian endometriosis may not be linked to in vitro fertilization as previously thought. Background Tubo-ovarian abscess (TOA) is a complex and severe complication found in 15-34% of patients with pelvic inflammatory disease (PID) [1,2]. PID and TOA occur more frequently and are more severe in women with endometriosis than in those without endometriosis [3]. Researchers have demonstrated that no difference in tubal patency and morphological alterations between patients with ovarian endometriosis and deep infiltrating endometriosis (DIE). The above findings suggested that endometriosis can aggravate tubal adhesions and distortions through intrinsic pathological mechanisms such as inflammatory microenvironment independent of disease type or severity [4]. A TOA associated with ovarian endometriosis (OE-TOA) is a potentially life-threatening condition [5], which is also related to other morbidities, such as infertility, chronic pelvic pain, and ectopic pregnancy [6]. Several risk factors for PID and TOA have been identified, including young age, multiple sexual partners, sexually transmitted infections, chlamydia and gonorrhea infections, uterine instrumentation, interruption of the cervical barrier, hysterosalpingography, hysteroscopy, and in vitro fertilization (IVF) [7][8][9]. However, more comprehensive studies on the risk factors for OE-TOA are still needed. Only a few studies have reported that IVF or oocyte retrieval plays an important role in the development of OE-TOA [10,11]. This is not surprising considering the high rate of infertility among individuals with endometriosis, with a prevalence rate of 30-50% [12,13]. This may result in a vicious cycle of endometriosis leading to infertility, causing the need for IVF, which then leads to TOA, and further infertility. 
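The 1:3 age-matched selection of controls summarised above (and detailed in the Methods that follow) could be implemented along the following lines; the greedy, without-replacement scheme and all identifiers and ages are assumptions for illustration, as the authors do not describe their matching algorithm.

```python
import pandas as pd


def match_controls(cases, controls, ratio=3, tolerance=3):
    """Greedy 1:ratio age matching within +/- tolerance years (one possible scheme)."""
    available = controls.copy()
    matched = []
    for _, case in cases.iterrows():
        pool = available[(available["age"] - case["age"]).abs() <= tolerance]
        picked = pool.head(ratio)                 # take up to `ratio` eligible controls
        matched.append(picked.assign(case_id=case["id"]))
        available = available.drop(picked.index)  # match without replacement
    return pd.concat(matched, ignore_index=True)


# Hypothetical records: patient id and age at surgery.
cases = pd.DataFrame({"id": [1, 2], "age": [34, 41]})
controls = pd.DataFrame({"id": range(100, 120), "age": [30 + i % 15 for i in range(20)]})
print(match_controls(cases, controls))
```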
Unfortunately, whether IVF and oocyte retrieval are risk factors for OE-TOA still remains controversial. The aim of this study was to explore the risk factors associated with OE-TOA and to provide an experimental basis for its early diagnosis, prevention, and cure. The secondary objective was to evaluate whether IVF increases the risk of OE-TOA. Methods This was a retrospective comparative study performed in a single medical center. The study was approved by the hospital's ethics committee, and informed consent was obtained from each patient that took part in this study. The medical records of 5595 consecutive patients diagnosed with ovarian endometriosis (OE) who underwent laparoscopy or laparotomy at Tianjin Central Hospital of Gynecology and Obstetrics between January 1, 2010, and December 1, 2019, were retrospectively reviewed. Of these patients, 176 were evaluated in this study and were divided into a case group (composed of 44 patients with OE-TOA) and a control group (composed of 132 non-OE-TOA patients), based on the following inclusion criteria: the indication for surgery was the presence of an adnexal mass (greater than 4 cm in diameter). The case group and the control group were determined according to the following criteria. (1) The case group: pus observed during surgery and a confirmed diagnosis of OE-TOA by pathological examination (the pathological criteria included endometrial glands and stroma within the ovarian cyst, and neutrophils infiltrating into the capsule, with or without acute pyogenic salpingitis). (2) The control group: no pus observed during surgery and pathological examination indicated only OE cysts. The exclusion criteria were: (1) cancers of pelvic organs; (2) appendiceal abscesses; (3) appendicitis; and (4) cases with incomplete or unknown data (Fig. 1). All diagnoses were confirmed during surgery and later by a pathologist. For each TOA case, three contemporaneous non-TOA control patients were selected from the electronic health record-derived data and matched by age (± 3 years). For both cases and controls, clinical data, demographic data, and putative risk factors for OE-TOA were extracted from the electronic health record, including age, marital status, gravidity, parity, infertility, previous PID, history of ectopic pregnancy, previous removal of OE cysts, previous appendectomy, cesarean delivery, IVF, uterine cavity surgery within 15 days, presence of an intrauterine device (IUD), lower genital tract infection, spontaneous rupture of ovarian endometriotic cysts, dysmenorrhea, diabetes mellitus, hypertension, smoking status, and carbohydrate antigen 125 (CA125). Statistical analyses Continuous variables that followed a normal distribution pattern and had homogenous variance were expressed as means ± standard deviations and were compared using Student's t-test. Non-normally distributed data were expressed as medians and analyzed using the Mann-Whitney U test. Intergroup differences in categorical variables were compared using the chi-square test or Fisher's exact test. In addition, a p value of ≤ 0.05 was used in the univariate analysis for inclusion of putative risk factors. Multivariate conditional logistic regression analysis was used to evaluate risk factors. Data processing and statistical analyses were completed using SPSS version 19.0 (IBM, Armonk, NY, USA). p values of < 0.05 were considered statistically significant. Results A total of 176 women were evaluated in this study. 
Among these women, 44 were diagnosed with OE-TOA during the study period. The control group consisted of 132 non-OE-TOA patients. The demographic data of the two groups were comparable (Table 1). The distribution of risk factors associated with OE-TOA in both the case and control groups was also tabulated (Table 2). Histories of PID and ectopic pregnancy were found in similar proportions of women in both groups. No statistically significant differences were found in operation history, including removal of OE cysts, appendectomy, and cesarean delivery, between the two groups. A higher proportion of women with TOA than without TOA had undergone IVF (6.8% vs. 0.8%, p = 0.049). Three patients in the case group and two patients in the control group had undergone uterine cavity operations within 15 days of admission; this difference was not statistically significant (p = 0.100). The number of women with IUDs in the case group was greater than that in the control group (p = 0.042). In the case group, 14 (31.8%) patients reported lower genital tract infections; in the control group, only 4 (3.0%) patients reported lower genital tract infections (p < 0.001). A greater number of patients had ruptured OE cysts in the case group than in the control group (9.1% vs. 1.5%, p = 0.016), as revealed by ultrasound results and surgical findings. The numbers of women with diabetes mellitus in the case and control groups were 5 (11.4%) and 3 (2.3%), respectively (p = 0.037). Dysmenorrhea was diagnosed in a similar proportion of patients in both groups. The differences in hypertension and smoking status between the two groups were not significant. There was also no significant difference in the level of CA125. Finally, the five factors with p ≤ 0.05 in the univariate analysis (IVF, presence of an IUD, lower genital tract infection, spontaneous rupture of OE cysts, and diabetes mellitus) were entered into the multivariate conditional logistic regression model, which identified the presence of a lower genital tract infection (odds ratio 5.462, 95% CI 1.772–16.839) and spontaneous rupture of ovarian endometriosis cysts (odds ratio 2.572, 95% CI 1.071–6.174) as statistically significant independent risk factors for OE-TOA. Discussion OE is a common benign gynecological disease, but secondary TOA formation is seldom reported. Schmidt et al. reported the incidence of OE-TOA to be 2.15% in 1981 [14]; this was consistent with previous reports indicating an OE-TOA incidence of 2.3% [15]. Of the 5,595 patients with OE in this study, 44 (0.79%) were diagnosed with OE-TOA. The incidence in this study was lower than that in previous reports. Although OE-TOA is rare, it is serious and sometimes fatal. This area of study requires our attention, as it has long been neglected. Patients with OE are more susceptible than the general population to TOA [15]. Possible pathogeneses are as follows. (1) OE is itself an immunodeficiency disease that impairs the ability of the immune system to ward off infections, allowing a TOA to emerge easily. (2) The OE capsule wall is thin and delicate, making it easy for bacteria to penetrate. (3) At the same time, OE blood content is an ideal culture medium that facilitates bacterial growth [16]. (4) The "bacterial contamination hypothesis" states that the incidence of intrauterine microbial colonization and endometritis is significantly higher among women with endometriosis, especially after gonadotrophin-releasing hormone agonist treatment [17]. These results seem to be in contrast with those of Mohamed Mabrouk et al., who affirmed that preoperative hormone intervention can shrink endometriosis lesions and reduce inflammation through ovarian inactivation. In addition, hormonal therapy can decrease the implantation of endometrial lesions, down-regulate cell proliferation, and increase the apoptosis of endometriosis tissues [18]. 
We observed a more than fivefold increase in OE-TOA risk after lower genital tract infection, which is in line with reports of previous studies. This may be because the cervical mucosal barrier is impaired during pathogenic microorganism infection; hence, infection can spread along the endometrium to other pelvic organs such as the fallopian tubes and ovaries [19]. This is a classic pattern of spread. According to related studies in the United States and Nordic countries, the pathogenic microorganisms of PID or TOA most commonly identified were Neisseria gonorrhea and Chlamydia trachomatis [20,21]. This is not the case in China. Several domestic studies have indicated low detection rates for both microbes. A new study focusing on next-generation sequencing analysis of cervical mucus indicates that in a variable microbiota, two organisms, Enterobacteriaceae and Streptococcus, are more frequently detected in women with endometriosis [22]. Results of this study show that the microbial detection rate in the lower genital tract was significantly higher in cases than in controls. Furthermore, the most frequent pathogen was Escherichia coli (50%), followed by Mycoplasma genitalium (21.4%) and Gardnerella vaginalis (21.4%). This is partially in agreement with reports of previous studies, emphasizing the need to promptly investigate and effectively treat these infections with appropriate antibiotics. Spontaneous rupture of ovarian endometriotic cysts was found to be a significant contributor to the risk of OE-TOA (OR = 2.572). To the best of our knowledge, rupturing of an ovarian endometriotic cyst as a risk factor for TOA has not been previously evaluated. Spontaneous rupture of an OE cyst is not usually a gynecological emergency. The incidence rate is seldom available in the literature from different countries, and the incidence rates reported in Chinese studies are inconsistent. The disease, characterized by abdominal pain and inflammation [23], is easily misdiagnosed due to its nonspecific clinical features and the lack of knowledge regarding biomarkers for early diagnosis [24]. Through analysis, our data shows that the incidence rates of OE cyst rupture in the case and control groups were 9.1% and 1.5%, respectively. Women with spontaneous rupture of ovarian endometriotic cysts had an increased risk of developing TOA. The exact underlying mechanism of this remains unclear. One possible explanation is that the capsule wall easily ruptures because of bleeding of the cyst and increased pressure during the pre-menstruation and menstruation phases of the menstrual cycle [25]. When the cyst bursts, a chocolate-like fluid pours into the abdominal cavity, leading to peritonitis. In addition, the blood content of an OE cyst is a good culture medium for mixed anaerobic bacteria, aerobic bacteria, and facultative bacterial infections [26]. If treatment is not initiated promptly, this could progress to a much more severe condition such as a TOA. Pelvic abscesses often have tubal changes, however, few studies have been conducted on the relation of endometriosis and tubal alterations. Mohamed Mabrouk et al. evaluated tubal changes in 473 cases of endometriosis [4]. It was found that the change of fallopian tube had nothing to do with the degree of invasion of endometriosis, but was related to the operation history of endometriosis. 
This study found that 15.9% of the patients in the case group had a history of endometriosis surgery, while only 6.8% of the patients in the control group did, although the difference was not statistically significant. Therefore, larger samples are needed in future investigations. Finally, whether a patient with OE will develop TOA after IVF has been a controversial topic. Several studies propose that IVF and oocyte retrieval are major risk factors for the development of OE-TOA. Moreover, the condition is often more serious in these cases. The above conclusions are supported by the theory that the blood in an endometrioma offers a nutrient-rich culture for bacterial growth after transvaginal inoculation [27]. However, it is thought that the rate of TOA is low in patients suspected of having an ovarian endometriotic cyst after IVF and egg retrieval, even though no such patients have undergone laparoscopic exploration [28,29]. Another view is that endometriosis-related infectious disease may be unrelated to assisted reproductive technology and that once OE-TOA occurs, the best form of intervention is early surgical drainage combined with intravenous antibiotics [30]. The results of our study directly contradict the view that patients with OE are more likely to have TOA after IVF or oocyte retrieval. These patients may benefit from comprehensive disinfection, antibiotic treatment, and ultrasound guidance to avoid intestinal puncture during oocyte retrieval. The limitations of the study include a single-institutional retrospective design and a small number of patients. Conclusions In conclusion, we found that IVF was not associated with an increased risk of OE-TOA. The risk factors significantly associated with OE-TOA were lower genital tract infections and spontaneous rupture of ovarian endometriotic cysts. To suppress the formation of OE-TOA and improve prognosis, suspected patients should be provided with prompt treatment, including prophylactic antibiotics (against Escherichia coli) as well as appropriate surgical interventions.
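As a brief methodological aside, the matched analysis described in the Methods (a univariate screen at p ≤ 0.05 followed by multivariate conditional logistic regression on 1:3 age-matched sets) can be sketched in code. The study itself used SPSS 19.0; the snippet below is only a minimal Python analogue, and the file name, column names, and variable coding are illustrative assumptions rather than the authors' actual dataset.

```python
# Minimal sketch of the matched case-control analysis described in the Methods.
# Assumes a hypothetical CSV with one row per woman: a match-set identifier
# ("matched_set"), the outcome ("toa", 1 = OE-TOA case, 0 = control), and
# binary candidate risk factors. Column and file names are illustrative only.
import numpy as np
import pandas as pd
from scipy.stats import fisher_exact
from statsmodels.discrete.conditional_models import ConditionalLogit

df = pd.read_csv("oe_toa_matched.csv")          # hypothetical file name
candidates = ["ivf", "iud", "lower_genital_tract_infection",
              "ruptured_oe_cyst", "diabetes_mellitus"]

# Univariate screen on the categorical factors (Fisher's exact test for
# sparse 2x2 tables); keep factors with p <= 0.05 for the multivariate model.
selected = []
for var in candidates:
    table = pd.crosstab(df[var], df["toa"])
    _, p = fisher_exact(table)
    if p <= 0.05:
        selected.append(var)

# Conditional logistic regression stratified on the age-matched sets.
model = ConditionalLogit(df["toa"], df[selected], groups=df["matched_set"])
result = model.fit()
print("Odds ratios:\n", np.exp(result.params))
print("95% CIs:\n", np.exp(result.conf_int()))
```

Conditioning on the matched-set identifier eliminates the set-specific intercepts instead of estimating them, which is what makes the odds-ratio estimates appropriate for 1:3 matched case–control data.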
Fabry-Perot cavity based on silica tube for strain sensing at high temperatures In this work, a Fabry-Perot cavity based on a new silica tube design is proposed. The tube presents a cladding with a thickness of ~14 μm and a hollow core. The presence of four small rods, of ~20 μm diameter each, placed in diametrically opposite positions ensure the mechanical stability of the tube. The cavity, formed by splicing a section of the silica tube between two sections of single mode fiber, is characterized in strain and temperature (from room temperature to 900 °C). When the sensor is exposed to high temperatures, there is a change in the response to strain. The influence of the thermal annealing is investigated in order to improve the sensing head performance. ©2015 Optical Society of America OCIS codes: (060.0060) Fiber optics and optical communications; (060.2370) Fiber optics sensors; (120.2230) Fabry-Perot. References and links 1. Y. Chen and H. F. Taylor, “Multiplexed fiber Fabry-Perot temperature sensor system using white-light interferometry,” Opt. Lett. 27(11), 903–905 (2002). 2. L.-C. Xu, M. Deng, D.-W. Duan, W.-P. Wen, and M. Han, “High-temperature measurement by using a PCFbased Fabry-Perot interferometer,” Opt. Lasers Eng. 50(10), 1391–1396 (2012). 3. M. S. Ferreira, L. Coelho, K. Schuster, J. Kobelke, J. L. Santos, and O. Frazão, “Fabry-Perot cavity based on a diaphragm-free hollow-core silica tube,” Opt. Lett. 36(20), 4029–4031 (2011). 4. H. Y. Choi, K. S. Park, S. J. Park, U.-C. Paek, B. H. Lee, and E. S. Choi, “Miniature fiber-optic high temperature sensor based on a hybrid structured Fabry-Perot interferometer,” Opt. Lett. 33(21), 2455–2457 (2008). 5. H. Y. Choi, G. Mudhana, K. S. Park, U.-C. Paek, and B. H. Lee, “Cross-talk free and ultra-compact fiber optic sensor for simultaneous measurement of temperature and refractive index,” Opt. Express 18(1), 141–149 (2010). 6. D. W. Duan, Y. J. Rao, W. P. Wen, J. Yao, D. Wu, L. C. Xu, and T. Zhu, “In-line all-fibre Fabry-Pérot interferometer high temperature sensor formed by large lateral offset splicing,” Electron. Lett. 47(6), 401–403 (2011). 7. J.-L. Kou, J. Feng, L. Ye, F. Xu, and Y.-Q. Lu, “Miniaturized fiber taper reflective interferometer for high temperature measurement,” Opt. Express 18(13), 14245–14250 (2010). 8. T. Wei, Y. Han, H.-L. Tsai, and H. Xiao, “Miniaturized fiber inline Fabry-Perot interferometer fabricated with a femtosecond laser,” Opt. Lett. 33(6), 536–538 (2008). 9. Y. J. Rao, T. Zhu, X. C. Yang, and D. W. Duan, “In-line fiber-optic etalon formed by hollow-core photonic crystal fiber,” Opt. Lett. 32(18), 2662–2664 (2007). 10. P. A. R. Tafulo, P. A. S. Jorge, J. L. Santos, and O. Frazão, “Fabry–Pérot cavities based on chemical etching for high temperature and strain measurement,” Opt. Commun. 285(6), 1159–1162 (2012). 11. S. Liu, Y. Wang, C. Liao, G. Wang, Z. Li, Q. Wang, J. Zhou, K. Yang, X. Zhong, J. Zhao, and J. Tang, “Highsensitivity strain sensor based on in-fiber improved Fabry-Perot interferometer,” Opt. Lett. 39(7), 2121–2124 (2014). 12. F. C. Favero, L. Araujo, G. Bouwmans, V. Finazzi, J. Villatoro, and V. Pruneri, “Spheroidal Fabry-Perot microcavities in optical fibers for high-sensitivity sensing,” Opt. Express 20(7), 7112–7118 (2012). 13. J. Villatoro, V. Finazzi, G. Coviello, and V. Pruneri, “Photonic-crystal-fiber-enabled micro-Fabry-Perot interferometer,” Opt. Lett. 34(16), 2441–2443 (2009). 14. D.-W. Duan, Y.-J. Rao, Y.-S. Hou, and T. 
Zhu, “Microbubble based fiber-optic Fabry-Perot interferometer formed by fusion splicing single-mode fibers for strain measurement,” Appl. Opt. 51(8), 1033–1036 (2012). #237081 $15.00 USD Received 27 Mar 2015; revised 15 May 2015; accepted 27 May 2015; published 9 Jun 2015 © 2015 OSA 15 Jun 2015 | Vol. 23, No. 12 | DOI:10.1364/OE.23.016063 | OPTICS EXPRESS 16063 15. Z. L. Ran, Y. J. Rao, H. Y. Deng, and X. Liao, “Miniature in-line photonic crystal fiber etalon fabricated by 157 nm laser micromachining,” Opt. Lett. 32(21), 3071–3073 (2007). 16. M. Deng, C.-P. Tang, T. Zhu, and Y.-J. Rao, “PCF-based Fabry-Pérot interferometric sensor for strain measurement at high temperatures,” IEEE Photon. Technol. Lett. 23(11), 700–702 (2011). 17. K. Schuster, S. Unger, C. Aichele, F. Lindner, S. Grimm, D. Litzkendorf, J. Kobelke, J. Bierlich, K. Wondraczek, and H. Bartelt, “Material and technology trends in fiber optics,” Adv. Opt. Technol. 3(4), 447–468 (2014). 18 M. S. Ferreira, J. Bierlich, J. Kobelke, K. Schuster, J. L. Santos, and O. Frazão, “Towards the control of highly sensitive Fabry-Pérot strain sensor based on hollow-core ring photonic crystal fiber,” Opt. Express 20(20), 21946–21952 (2012). 19. O. Mazurin, M. Streltsina, and T. Shvaiko-Shvaikovskaia, “Handbook of glass data. Part A: silica glass and binary silicate glasses,” 15 (1983). 20. J. A. Bucaro and H. D. Dardy, “High-temperature Brillouin scattering in fused quartz,” J. Appl. Phys. 45(12), Introduction Optical fiber sensors based on Fabry-Perot (FP) cavities, due to their inherent characteristics, are suitable for many different applications, such as in sensing of physical, chemical and biological parameters. These structures are easy to produce, are compact and reliable. Besides, the simple configuration ensures stability and ability to be multiplexed [1]. The measurement of physical parameters such as strain and temperature is of most importance in practical applications, in particular when extreme conditions are involved. Many different FP based configurations have been proposed where temperatures up to 1100 °C have been tested [2]. In order to obtain FP based sensors more sensitive to temperature, the cavity is usually formed at the tip of the fiber. These structures can be created by fusion splicing different types of fibers [3][4][5], by splicing two sections of single mode fiber (SMF) with a large lateral offset [6], or even by fabricating a FP modal interferometer in a fiber taper probe [7]. These configurations usually present reduced dimensions, of the order of hundreds of micrometers. When the FP cavities are formed between two sections of fiber, they become less sensitive to temperature, with sensitivities lower than 1 pm/°C. The structures can be formed by femtosecond laser ablation [8], by splicing a short section of hollow core photonic crystal fiber [9] or a chemically etched multimode fiber section [10] between two sections of SMF. Another possibility is to form an air cavity, like a bubble or a spheroidal cavity, inside the fiber. There are different ways to fabricate the structures through splicing. For example, the splice of two sections of SMF [11,12], SMF spliced to index guiding photonic crystal fiber [13] or splicing a flat and hemispherical tip of SMF [14]. Due to their configuration and low thermal sensitivity, these FP cavities become highly attractive to measure other parameters such as strain. However, the sensors proposed in these works were characterized to strain and temperature separately. 
The measurement of strain at high temperatures using FP cavities was proposed by Ran et al in 2007 [15]. An etalon in photonic crystal fiber was fabricated using 157 nm laser micromachining. Strain measurements were carried out over temperatures as high as 800 °C. The fabrication of an air bubble cavity by splicing a multimode photonic crystal fiber to a SMF has also been proposed [16]. In this case, strain measurements up to 1850 με were performed in a temperature range between 100 °C to 750 °C. In this work, a Fabry-Perot cavity based on a new silica tube design is proposed. A number of sensors are fabricated and tested to strain and temperatures as high as 900 °C. Given the low temperature sensitivity, this configuration is also subjected to strain at different temperatures. Besides, the effect of the annealing is also analyzed. The proposed sensor exhibits features that translate into a different solution for the measurement of strain in harsh environments. Experimental results All components of the silica tube used in this work were manufactured from high purity silica Heraeus Suprasil® F300. The first procedure was to define the sintering of the four rods (diameter 1.2 mm) inside the cladding tube (with and outer diameter of 6 mm and inner diameter of 4 mm) in exact orthogonal positions, using a modified chemical vapor deposition (MCVD) glass working lathe and one hydrogen-oxygen torch. The temperature was adjusted so that a lateral homogenous peripheral sintering was achieved. An elliptical deformation of the outer cladding tube and collapse was prevented by selecting a preferably low temperature and short lateral heating zone (ca. 1 cm). The preform was drawn to the final fiber by pressurized drawing with a constant temperature. The pressure inside the preform was varied between 1000 Pa and 3000 Pa above atmospheric pressure to shift the cavity size inside the fiber. It was drawn to a diameter of 125 µm and coated with single layer UV acrylate. The effect of increasing the pressure on the cavity cross section is shown in Fig. 1 [17]. From the different silica tubes drawn, the one shown in Fig. 1(b) was used as the sensing element. The silica tube presented a cladding with a thickness of ~14 μm, a hollow core and four small rods positioned in diametrically opposite directions. Each rod has a diameter of ~20 μm and has a reinforcement effect in the structure. This matter will be discussed later on this paper. The Fabry-Perot (FP) cavity shown in Fig. 2 was produced by splicing a short section of the silica tube between two sections of standard single mode fiber (SMF). In order to prevent the collapsing of the structure in the splice region, both splices were done in the manual program of the splice machine (Fujikura FSM-60S) and the fibers were placed with a lateral offset regarding the electric arc discharge area. Thus, the discharge was mainly produced in the SMF region. Notice that the FP cavity did not present coating, as it was removed during the fabrication process. The sensors were easy to manufacture and reproducible. The strain and temperature measurements were performed in reflection, by connecting the broadband optical source, the sensing head and the optical spectrum analyzer to an optical circulator, according to the scheme in Fig. 2. The optical source had a bandwidth of 100 nm, centered at 1570 nm and the measurements were done with a resolution of 0.02 nm. Several sensing heads were produced, with different cavity lengths. 
The spectrum of each sensor, presented in Fig. 3, is the result of a two wave interferometer. The two reflections occur at the interfaces between SMF and the silica tube. The effective refractive index, n eff was estimated according to Eq. (1): where L FP corresponds to the length of the FP cavity, λ 1 and λ 2 are the wavelengths of two adjacent fringes and Δλ = λ 2 -λ 1 is the free spectral range. The value obtained for the n eff was of ~1.00, which means that all light travels inside the hollow core. In a first stage, the sensing heads were attached to a translation stage with a resolution of 0.01 mm and strain measurements were performed at room temperature. The experimental data, shown in Fig. 4(a), exhibit a linear behavior of the wavelength shift with the applied strain. As expected, the sensitivity to strain depends on the cavity length: smaller cavities translate into more sensitive devices. According to ref [18], the response of the sensor depends not only on the FP cavity length, but also on the total length over which strain is applied. In this study, the total length, L T , was of 73.5 cm and it was kept constant for all the strain measurements. Sensitivities of 13.9 pm/με, 6.0 pm/με, 4.6 pm/με and 3.5 pm/με were respectively obtained for the 17 μm, 51 μm, 70 μm and 198 μm long sensing heads. The nonlinear behavior of the strain sensitivity with the cavity length has been described in detail in [18]. The 198 μm long sensing head was placed inside a tubular oven, with the FP cavity positioned at its center. The fiber was kept straight but loose, without any tension. The sensor was subjected to a temperature variation of ~900 °C and its response is shown in Fig. 4(b). The experimental data was well adjusted to a linear fitting and a sensitivity of 0.85 pm/°C was attained, which indicates that this sensor has a cross sensitivity of ~0.18 με/°C. Besides, for this sensor thermal sensitivity, and considering a wavelength of 1547.14 nm, one gets Since this FP cavity presents such low temperature sensitivity, it is worthwhile to study its behavior when strain is applied in extreme temperature conditions. The 70 μm long sensing head was placed in a tubular oven and on the outside the fiber was fixed to a translation stage. The temperature was increased from room temperature (~22°C) to 750°C in steps of 150 °C. From 750 °C to 900 °C the steps were of 50 °C. At each temperature step the setup was stable for 30 minutes and the fiber was kept straight with a slight tension. After that time, strain measurements were done, by increasing the tension in the fiber up to 1000 με (up curves in Fig. 5), and decreasing it back to its initial state (down curves in Fig. 5). Until 600 °C, the behavior was nearly the same and the sensitivities obtained when increasing strain were similar as when decreasing it. However, from 750 °C on, the sensor showed higher sensitivity as strain increased, indicating that such high temperature has the effect of reducing the Young modulus of the silica tube, also associated with a certain level of induced plasticity, as indicated by the fact interferometric fringes do not return to the original wavelength values when the strain is decreased to zero (it is observed a red shift of ~1 nm, resulting from this a reduction of the strain sensitivity during the step of diminishing the applied tension). The annealing effect was studied by subjecting the 51 μm long sensing head to a temperature of 900 °C for 7 hours (see Fig. 6). 
There was a total wavelength shift of 4.4 nm throughout this period of time. In the first 40 minutes the variation was of ~0.1 nm/min. After that time, the wavelength shift became slower and from 4 hours to 7 hours the change was of ~3 pm/min. The oven was then switched off and cooled down until it reached room temperature. The same procedure as for the 70 μm long sensing head was then carried on for this FP cavity, and the results are depicted in Fig. 7. In this case, the difference between increasing and decreasing the applied strain at high temperatures was not as notorious as in the previous experiment. The small difference at 900 °C can be due to the fact that the annealing was not fully performed. It is also interesting to observe how the strain sensitivity changes with temperature. In all the sensors tested the strain sensitivity decreased as temperature increased up to 600 °C, increasing again afterwards (Fig. 8). This effect can be attributed to two reasons: the nonlinear variation of the thermal expansion of silica as temperature arises [19], as well as the variation of the photoelastic constant of the silica tube with this parameter. The photoelastic constant is essentially determined by the Pockel's coefficient, p 12 , which exhibits a maximum at 600 °C [20], translating into a minimum in the strain sensitivity. Nevertheless, the difference between applying strain or reduce it is much more significant when no annealing occurred. The silica tube design used in this work proved to be mechanically stable in harsh conditions, such as strain at extreme temperatures. Its behavior was also compared with a FP cavity based on a pure silica tube fabricated under the same conditions. In this case, the silica tube had a hollow core diameter of ~57 μm with a total cross-section area of ~2552 μm 2 . Regarding the new design, its hollow cross section area was of ~5981 μm 2 . Two sensors, fabricated with a length of ~1.2 mm, were subjected to strain until rupture. The sensor based on the silica tube without rods was able to measure strain up to 1500 με, with a linear sensitivity of 2.18 pm/με. Regarding the sensor with the new silica tube design, it was possible to measure strain up to 2500 με with a linear sensitivity of 3.39 pm/με. Due to the presence of the internal rods in this design it is not possible to perform direct comparisons of its mechanical characteristics with those associated with a standard silica tube. Anyway, the relevant topic to emphasize is that the new design shows favorable sensing properties both in what concerns strain sensitivity and mechanical resilience, indicating its adequacy for application in harsh environments. Conclusions A Fabry-Perot sensor was proposed to measure strain at high temperatures. The cavity was formed by splicing a new advanced silica tube between two sections of standard single mode optical fiber. The new hollow core silica tube presents a thin cladding supported on its inside by four small rods with diametrically opposed positions. The presence of the four rods ensures a higher mechanical stability besides a higher sensitivity to strain, when compared to a traditional silica tube. Different sensors were produced by varying the cavity lengths, and subjected to strain and temperature variations. The 17 μm long sensor presented a sensitivity of 13.9 pm/με. The sensitivity to temperature of this configuration was below 1 pm/°C, with the thermal expansion of silica being the dominant effect. 
Since this configuration presented low sensitivity to temperature, it was considered a good candidate for measuring strain at extreme temperatures. However, when strain was applied at temperatures above 750 °C, the sensitivity increased as the fiber was being tensioned. When the fiber returned to its initial state, without strain applied, the sensitivity decreased to a value similar to the one found at lower temperatures. This difference was reduced by thermally annealing the sensing head before subjecting it to strain under extreme conditions. Furthermore, the strain sensitivity decreased as temperature increased up to 600 °C, where it reached a minimum. Above 750 °C, an increase in strain sensitivity was observed.
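As a supplementary note, the printed form of Eq. (1) referenced in the Experimental results did not survive text extraction. For a low-finesse, two-wave Fabry-Perot cavity, the relation it appears to correspond to is the standard free-spectral-range expression, n_eff = λ1 λ2 / (2 L_FP Δλ). The sketch below evaluates it in Python with illustrative fringe wavelengths chosen to be consistent with the text (a fringe spacing of roughly 6 nm near 1570 nm for the 198 μm cavity); these numbers are assumptions for demonstration, not values read from the authors' spectra.

```python
# Minimal sketch of the standard two-wave Fabry-Perot relations that Eq. (1)
# appears to refer to. Numerical inputs are illustrative assumptions.

def n_eff_from_fringes(lambda1_nm: float, lambda2_nm: float, L_fp_um: float) -> float:
    """Effective index from two adjacent fringe wavelengths and cavity length.

    For a low-finesse cavity the free spectral range is
    delta_lambda = lambda1 * lambda2 / (2 * n_eff * L_FP),
    so n_eff = lambda1 * lambda2 / (2 * L_FP * delta_lambda).
    """
    delta_lambda_nm = abs(lambda2_nm - lambda1_nm)
    L_fp_nm = L_fp_um * 1e3  # convert cavity length to nm
    return lambda1_nm * lambda2_nm / (2.0 * L_fp_nm * delta_lambda_nm)

def free_spectral_range_nm(lambda_nm: float, L_fp_um: float, n_eff: float = 1.0) -> float:
    """Expected fringe spacing for a cavity of length L_FP at wavelength lambda."""
    return lambda_nm ** 2 / (2.0 * n_eff * L_fp_um * 1e3)

# Fringes roughly 6.2 nm apart around 1570 nm for the 198 um cavity give
# n_eff close to 1, consistent with the light travelling in the hollow core.
print(n_eff_from_fringes(1566.9, 1573.1, 198.0))   # ~1.00
print(free_spectral_range_nm(1570.0, 17.0))        # ~72 nm for the 17 um head
```

One consequence of this relation is that, within the 100 nm source bandwidth, the shortest cavities produce only a few broad fringes, which fits the cavity-length dependence of the spectra described above.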
E2 multimeric scaffold for vaccine formulation: immune response by intranasal delivery and transcriptome profile of E2-pulsed dendritic cells Background The E2 multimeric scaffold represents a powerful delivery system able to elicit robust humoral and cellular immune responses upon systemic administrations. Here recombinant E2 scaffold displaying the third variable loop of HIV-1 Envelope gp120 glycoprotein was administered via mucosa, and the mucosal and systemic immune responses were analysed. To gain further insights into the molecular mechanisms that orchestrate the immune response upon E2 vaccination, we analysed the transcriptome profile of dendritic cells (DCs) exposed to the E2 scaffold with the aim to define a specific gene expression signature for E2-primed immune responses. Results The in vivo immunogenicity and the potential of E2 scaffold as a mucosal vaccine candidate were investigated in BALB/c mice vaccinated via the intranasal route. Fecal and systemic antigen-specific IgA antibodies, cytokine-producing CD4+ and CD8+ cells were induced assessing the immunogenicity of E2 particles via intranasal administration. The cytokine analysis identified a mixed T-helper cell response, while the systemic antibody response showed a prevalence of IgG1 isotype indicative of a polarized Th2-type immune response. RNA-Sequencing analysis revealed that E2 scaffold up-regulates in DCs transcriptional regulators of the Th2-polarizing cell response, defining a type 2 DC transcriptomic signature. Conclusions The current study provides experimental evidence to the possible application of E2 scaffold as antigen delivery system for mucosal immunization and taking advantages of genome-wide approach dissects the type of response induced by E2 particles. Background The E2 system is a delivery vehicle in which antigenic determinants are inserted on the surface of an icosahedral scaffold formed by the acyltrasferase component (E2 protein) of the multienzyme pyruvate dehydrogenase (PDH) complex from Geobacillus stearothermophilus [1,2]. E2 naturally serves as a docking unit for other large PDH subunits. This property makes it an excellent scaffold for the presentation of N-terminal fused heterologous antigens. The scaffold can be refolded in vitro to produce pure or chimeric particles similar to virions in size and complexity, it is able to confer high immunogenicity to the displayed determinants, and it is suitable for vaccine formulations [3][4][5][6]. Various protein domains, such as the HIV-1 Envelope (Env) V3 loop, can be assembled into lipopolysaccharide (LPS)-free E2 recombinant vaccines, and we previously demonstrated that their systemic administrations are able to elicit potent binding antibodies and T-cell responses in mice, as well as autologous neutralizing antibodies in rabbits [7]. A shared feature of many pathogens is that the infection occurs or initiates at a mucosal surface. While systemic vaccination offers protection against pathogens such as polio and influenza viruses, induction of mucosal immunity is required for effective protection against pathogens such as HIV, human papillomavirus, herpes viruses, Vibrio cholera and the Mycobacterium species [8][9][10][11][12][13]. Antibodies patrolling the mucosal epithelium appear to play a crucial role in blocking HIV-1 mucosal challenge [14]. Therefore, it is necessary to develop adequate mucosal vaccination protocols for this type of infection. 
Understanding the immunological mechanisms of vaccination is of paramount importance for the rationale design of a vaccine. Recently, the use of systems biology approaches gave an important contribution to elucidate the fundamental mechanisms by which the innate immune system orchestrates protective immune responses triggered by vaccination [15][16][17]. High-throughput sequencing can be applied to explore cell transcriptome and to identify differences in gene expression and alternative splicing. In vaccinology, gene expression patterns induced by priming antigen presenting cells (APCs) could be used to predict antigen immunogenicity and the type of T-helper cell polarization, and/or help to select the appropriate adjuvant to administer in the vaccine formulation. As proof-of-concept, here we report the immunogenicity of E2 scaffold in a model of mucosal vaccination, and show that intranasal administration of E2-based vaccines is able to induce mucosal and systemic immune responses. Through RNA-Sequencing (RNA-Seq) analysis, we attempt to identify the molecular signatures that could account for E2-primed immune responses, and analyze the gene expression profile of bone marrow-derived dendritic cells (BMDCs) pulsed with E2 scaffold compared to un-pulsed cells. Mucosal vaccination with E2-based vaccines Female BALB/c (H-2d MHC) mice, 6 to 12 week-old, were purchased from Charles River Laboratory (Lecco, Italy) and housed under specific pathogen free conditions at the Animal Facility of the Institute of Food Science of C.N.R., Avellino, Italy (accreditation no. DM.161/99). Groups of five BALB/c mice were intranasally immunized with 50 μg of E2wt, Env(V3)-E2, or Env(V3)-E2 with 1 μg/ mouse of cholera toxin (CT) adjuvant (Sigma Aldrich), at weekly intervals for 7 weeks. The intranasal (i.n.) administration was carried out by inoculating 20 μl of antigens to the nostril. Fecal pellets were collected at weekly intervals, weighed, homogenized at a concentration of 10 mg/ml in 1× phosphate buffered saline (PBS), 0.01 % sodium azide (Sigma Aldrich), and centrifuged at 10,000 g for 10 min to remove debris as previously described [20]; supernatants were recovered and stored frozen. One week after the final boost, mice were sacrificed to collect spleen and serum samples. ELISA assay for detection of IgA and IgG antibodies Analysis of fecal IgA, serum IgA and IgG antibodies (IgG, IgG1, and IgG2a) were performed by in house ELISA as previously described [20] using the synthetic HIV-1 SF162 V3 peptide, V3 scrambled peptide, Env(V3)-E2 or E2wt proteins as antigens. Briefly, 96-well ELISA plates (MaxiSorp TM , NUNC, NY) were precoated with 100 μl/well of 2 μg/ml of E2 antigens or V3 synthetic peptides in 1× PBS and incubated overnight at 4°C. Plates were washed 4 times with PBS containing 0.05 % Tween-20 (Sigma Aldrich) and blocked by adding 200 μl/well of PBS containing 2 % bovin serum albumin (Sigma Aldrich) at room temperature (RT) for 2 h. After the incubation, plates were washed 4 times with washing buffer. Two-fold serial dilutions of sera (starting at 1:100 and ending to 1:12,800) and 100 μl of soluble fecal extracts were added to the plates and incubated at RT for 2 h. All samples were tested in duplicate. After washing, plates were incubated with 1:2,000 dilution of peroxidaseconjugated rabbit anti-mouse IgA, IgG, IgG1, or IgG2a antibodies (Santa Cruz Biotechnology, Dallas, USA). 
Results were expressed as absorbance values after blank subtraction or as reciprocal endpoint titer of the last dilution exhibiting OD 450 ≥ 0.12. The highest dilution tested was 1:128,000. Serum titers of <1:100 were considered negative and were reported in figures with an arbitrary titer value of 1:10. Levels of total IgA antibodies in soluble fecal extracts (100 μl of 10 mg/ml) were quantified in duplicates using the RayBio® Mouse IgA ELISA Kit, according to the manufacturer's instructions. The Env(V3)-specific IgA antibodies were normalized by dividing the specific OD 450 readings of IgA by the total IgA concentration in ng/ml [21], measured in fecal samples at week 4 for each animal from Env(V3)-E2 and Env(V3)-E2 + CT groups. BMDCs, RNA-Seq library production, sequencing and data analysis Bone marrow-derived dendritic cells (BMDCs) were generated according to previously described protocols [22]. Briefly, bone marrow cells were collected by flushing tibias of C57BL/6 mice (Charles River Laboratory) with complete medium (RPMI 1640, 50 μM 2-mercaptoethanol, 1 mM sodium pyruvate, 100 U/ml penicillin, 100 μg/ml streptomycin, 10 % FCS). Cells were seeded in bacteriological Petri dishes in complete medium supplemented with 200 U/ml of recombinant murine granulocyte-macrophage colony stimulating factor (GM-CSF, Peprotech, NJ, USA) for 7 days. Immature cells were harvested and incubated or not for 1 more day with 50 μg/ml of LPS-free E2wt antigen at a final concentration of 2.5 × 10 6 cells/ml. Total RNA was extracted from untreated or E2-pulsed BMDCs using Tri Reagent (Sigma Aldrich) according to manufacturer's protocol. RNA integrity was assessed as described in Costa et al. [23]. Paired-end libraries (100 x 2 bp) prepared using TruSeq RNA Sample Preparation Kit (Illumina Inc., San Diego, CA) were sequenced on Illumina HiSeq2000 platform. About 200 million paired-end reads were sequenced. Quality was assessed using FastQC (http://www.bioinformatics.babraham.ac.uk/projects/fastqc/). TopHat version 2.0.10 [24] was used to map reads on reference mouse genome (mm9) and RefSeq mouse transcripts annotation with default parameters. More than 95 % of sequenced reads uniquely mapped to mm9. Such reads were used for further analyses. Coverage files were produced using BED-Tools and loaded on UCSC Genome Browsers to analyze gene-specific features. Gene expression was measured using Cufflinks 2 [25]. Cuffdiff and CummeRBund [26] were used to identify differentially expressed genes using normalized expression values (FPKM, Fragments Per Kilobase of transcript and Million of mapped reads). An arbitrary threshold of 0.05 FDR (False Discovery Rate) and 1 FPKM in at least one condition was used to filter out differentially expressed genes. Gene ontology and pathway analysis were performed using DAVID (Database for Annotation, Visualization and Integrated Discovery) [27]. Statistics Statistical analyses were performed using the unpaired two-tailed Student's t-test. Differences were considered statistically significant when P < 0.05. Dissection of fecal antibody responses after mucosal immunization To determine whether HIV-1 epitopes displayed on the surface of E2 scaffold could elicit antigen-specific mucosal immune responses, the V3 loop of HIV-1 Envelope gp120 glycoprotein was expressed as an N-terminal fusion to the catalytic core domain of E2 and purified according to previously described methodologies [7]. 
The HIV-1 clade B primary isolate SF162 (Tier 1) V3 portion (46 aminoacids total) inserted into the E2 scaffold contained the epitope (G 312 PGR 315 ) recognized by the human anti-V3 monoclonal neutralizing antibody (NmAb) 447-52D and the H-2d restricted CTL (cytotoxic T lymphocyte) epitope (I 311 GPGRAFYA 319 ). The resulting vaccines were nonreplicative multimeric particles formed by HIV-1 antigens inserted on the surface of E2 60-mer scaffold protein, and we evaluated their immunogenicity as mucosal vaccines in mice. To this end, BALB/c mice of the H-2d haplotype (n = 5 per group) were intranasally immunized with Env(V3)-E2, administered with or without the adjuvant cholera toxin (CT). Vaccinations were conducted weekly for seven weeks. Fecal pellets were collected after each immunization. Mucosal IgA responses were measured against E2wt (Fig. 1a), the synthetic HIV-1 SF162 V3-specific peptide, corresponding to residues 301-322 of the V3 loop presented on the E2 surface, and V3 scrambled peptide (Fig. 1b). Figure 1a shows the mean absorbance values for antibodies obtained in feces against the E2 scaffold reaching a plateau after 4 weeks of treatment with the Env(V3)-E2 vaccines in presence or absence of CT. After the first immunization, all vaccinated mice generated anti-carrier E2-specific responses that were enhanced following subsequent administrations (Fig. 1a) and sustained throughout the course of the experiment. No statistically significant differences (P > 0.05) in the absorbance values of E2-binding antibodies were observed between groups of immunized mice, suggesting that E2 scaffold is also immunogenic at mucosal sites in the absence of added adjuvant. Concerning V3-specific antibody responses, Env(V3)-E2 + CT group showed at week 7 significantly higher levels of V3-binding antibodies compared to Env(V3)-E2 group (P = 0.0051) (Fig. 1b). This difference remains significant after normalization of the Env(V3)-specific IgA responses to whole IgA concentration (P = 0.0262, Fig. 1c). Thus, in contrast to antibody responses to the E2 carrier, adjuvant was essential to induce detectable antigenspecific mucosal immune responses. Overall, this experiment demonstrates that intranasal administration of E2-based vaccines is able to induce anti-Env antigen-specific antibodies in fecal samples. Analysis of systemic antibody and CD4 T cell responses Since the strongest V3-specific IgA response was observed following inoculation of Env(V3)-E2 administered with adjuvant, we focused our interest on this group (Env(V3)-E2 + CT) for additional experiments. The E2 wild type group (E2wt) was included to compare the responses generated against the carrier. We assessed the systemic antibody response against the E2 scaffold and Fig. 1 Mucosal antibody responses. BALB/c mice (n = 5) were intranasally immunized with: Env(V3)-E2 particles administered with (triangle) or without (square) CT adjuvant. At weekly intervals fecal pellets were collected to determine the presence of E2-binding antibodies and Env-specific IgA. Naïve mouse group (N.I., n = 5, diamonds) was used as control of background response. Fecal anti-E2 and anti-V3 IgA antibodies were measured by coating in ELISA assay (a) E2wt protein or (b) synthetic HIV-1 SF162 V3 peptide (filled triangles or squares), and scrambled peptide (empty triangles or squares). A representative experiment out of two is shown. Graphs show the mean absorbance values (± S.D.) 
of fecal samples from mice of each group at the indicated time points; statistical significance was determined using the unpaired two-tailed Student's t-test, P value is reported. (c) Specific levels of IgA antibodies to V3 peptide as defined by ELISA OD 450 readings normalized to whole IgA concentration in soluble fecal extracts from mice immunized with Env(V3)-E2 particles administered with (triangle) or without (square) CT adjuvant the Env(V3)-E2 protein in serum samples by ELISA. All animals developed high titers (~10 4 ) of anti-carrier E2 specific IgG antibodies (Fig. 2a). Mice immunized with Env(V3)-E2 particles developed significantly higher titers of Env(V3)-E2 binding antibodies compared to the E2 wild type control group (Fig. 2a). To specifically detect antibodies raised against the 447-52D core epitope we used the V3 synthetic peptide as antigen (Fig. 2b and c), and found that Env(V3)-E2 particles were able to elicit serum IgG (Fig. 2b) and IgA (Fig. 2c) V3-binding antibodies. These results indicate that intranasal delivery of E2 is able to elicit mucosal and systemic antibody responses directed toward the 447-52D core epitope of the HIV-1 V3 loop region. In order to examine the type of immune responses induced in vivo by E2 vaccination following the intranasal route, we analyzed the isotype ratios of serum IgG1/ IgG2a and evaluated the induction of antigen-specific CD4 + T cells in vaccinated mice. We first determined the isotype of anti-V3 antibodies in sera collected at the end of vaccination. Intranasal administration of Env(V3)-E2 based vaccines induced a preferential increase of IgG1 (Fig. 3a and c) against the V3 peptide over the IgG2a isotype ( Fig. 3b and c), suggesting a polarization of CD4 + T cells toward the Th2 subset, consistent with findings previously reported by us [5,28]. We next evaluated whether mucosal administration of Env(V3)-E2 induced Th2 cells. To this end, we characterized the type of the cellular immune response by measuring cytokine production from splenic CD4 + T cells. Env(V3)-E2 particles elicited antigen-specific CD4 + T cells able to produce the Th2 cytokine IL-4 (Fig. 3d). However, following the intranasal vaccination regimen, we also detected IFN-γ production in CD4 + T cells purified from vaccinated mice (Fig. 3e). Analysis of V3-specific CD8 T-cell response Since CD8 + T cells control the replication of highly pathogenic immunodeficiency viruses [29,30], we also investigated the antigen-specific CD8 + T-cell response following the intranasal route. One week after the final boost, isolated splenocytes were analysed for IFN-γ production after 6 days of in vitro stimulation with V3pulsed LPS-blasts. In Fig. 4 we reported the percentage values of V3-specific IFN-γ producing CD8 + T cells (Fig. 4a) and representative dot plots of ICS (Fig. 4b). Intranasal administration of Env(V3)-E2 particles was able to induce V3-specific CD8 + T cells that produced IFN-γ. Analysis of transcriptome profile of E2-pulsed BMDCs We used genome-wide transcriptional approach to identify gene expression signatures induced in antigen presenting cells by E2 vaccine, to understand and elucidate the molecular mechanisms that account for the E2-primed immune responses. In detail, we performed RNA-Sequencing analysis on mouse immature BMDCs cultured with LPS-free E2wt particles to explore the impact of E2 scaffold on the transcriptional profile of BMDCs. 
Comparing the transcriptome of un-pulsed BMDCs with the transcriptome of E2-treated cells, we observed a substantial deregulation of gene expression after E2 treatment (more than 5300 genes with FDR < 0.05). In particular, a significant up-regulation was measured for 2984 genes upon E2 stimulation (Fig. 5a). Statistically significant molecular pathways perturbed in E2-pulsed BMDCs were defined by mapping up-regulated genes to KEGG (Kyoto Encyclopedia of Genes and Genomes) pathways using the Database for Annotation, Visualization and Integrated Discovery [27]. Such analysis revealed that E2 exposure triggers the up-regulation of genes associated with immune response, such as the "Chemokine signaling" and the "Jak-STAT signaling" pathways, and the expression levels of selected genes involved in these pathways are illustrated in Fig. 5b. Interestingly, RNA-Seq data revealed a significant up-regulation of genes correlated with type-2 polarized DCs (green asterisks in Fig. 5a and b). Among them, a significant transcriptional activation was measured for genes encoding Janus Kinase 2 (Jak2 gene), the Signal Transducer and Activator of Transcription 5, STAT5 (Stat5a and Stat5b genes), the transcription factor Nf-kB (Nfkb1), the Th2 chemokine CCL17 (Ccl17 gene), the Notch ligand Jagged-1 protein (Jag1 gene) and Interferon regulatory factor 4 (Irf4 gene). Notably, an opposite expression trend was observed for genes encoding proteins related to Th1 cell polarization. In particular, interferon signaling genes such as Stat1 and Irf7, and chemokine and interleukin encoding genes -including Cxcl10 and Il15 -were significantly down-modulated after BMDC exposure to E2 compared to untreated cells (red asterisks in Fig. 5a and b). Since the expression of Irf4 has been positively correlated to a transcriptional activation of Th2 polarization-related genes [31], we investigated the expression level of these genes. In particular, RNA-Seq analysis revealed that Itgam (integrin alpha M), Pdcd1lg2 (programmed cell death 1 ligand 2), and Ciita (class II, major histocompatibility complex, transactivator) genes were up-regulated in DCs exposed to E2. Furthermore, in E2-pulsed BMDCs, Il12b, Il6 and other genes encoding receptors of innate immune systemsuch as Nlrp3 (NOD-like receptor family, pyrin domain containing 3), Tlr2 (toll-like receptor 2) and Il7r (interleukin 7 receptor) genes -were up-regulated. In addition, the Fig. 3 Isotype analysis and T helper cell responses. a-c Graphs show the reciprocal endpoint titers of anti-V3 isotypes from mice (n = 5) intranasally immunized with Env(V3)-E2 + CT or E2wt. a IgG1 and b IgG2a serum anti-V3 antibodies. Naïve mouse group (N.I., n = 5) was used as control of background response. c Anti-V3 IgG1 (gray bars) and IgG2a (white bars) endpoint titers for each animal from Env(V3)-E2 plus CT group. Lines represent median values; statistical significance was determined using the unpaired two-tailed Student's t-test; P values are reported. d Ex vivo production of IL-4 and e IFN-γ by purified CD4 + T cells isolated from mice immunized as above, in response to spleen cells pulsed with no antigen (no ag), E2wt, or Env(V3)-E2. Cytokines were detected by ELISA assay; statistical significance was determined using the unpaired two-tailed Student's t-test; P values are reported. 
A representative experiment out of two is shown expression levels of genes encoding the co-stimulatory molecule CD40 and the Notch2 signaling components (Notch2, Rbpj, Mib1) increased when BMDCs were stimulated with E2 (Fig. 5b). It should be emphasized that the Notch2-RBPJ signaling receptor has been described to control functional differentiation of DCs and specify a unique CD11b + DC subset required for efficient T cell priming [32]. These results suggest that E2 scaffold may activate DCs through innate immune receptor (PRRs, pattern recognition receptors) and provide signals that program DCs to prime Th2 cells, through the expression of a Th2-type DC signature. Discussion Specific immune responses are characterized by different patterns of cytokines produced by CD4 + T cells, an event known as polarization of helper T cells and there can be striking differences in the type of response preferentially stimulated by different carriers and/or adjuvants. In the context of a vaccine formulation, it is important to dissect the polarization induced by the E2 carrier in order to understand how to best deliver and eventually combine immunogens. We previously described that subcutaneous administration of E2 particles expressing the Env(V3) hypervariable region elicited a sustained immune response by inducing both antibodies and antigen specific T cells [7]. Here we report that mucosally administered Env(V3)-E2 were also able to elicit mucosal and systemic V3-specific humoral immune responses. Due to the limited sample volume obtained from mice we were not able to assess the neutralization activity of the antibodies induced by E2 immunization. However, it should be emphasized that preclinical studies of HIV-1 vaccine candidates have typically shown that neutralizing antibodies (NAbs) with a long heavy chain complementarity-determining region 3 (HCDR3) are found in non-human primate and rabbit animal models, with the latter being currently the favorite model for testing candidates at eliciting NAbs. We previously reported that our Env(V3)-E2 construct was able to induce autologous NAbs when administered subcutaneously in rabbits, with a modest level of neutralization of some Tier 1 viruses [7]. For this reason we plan to assess in future work using larger animal models the induction of neutralizing antibodies by E2 mucosal immunization. Here we also report that intranasal administration of Env(V3)-E2 particles elicits antigen-specific splenic CD8 + and CD4 + T cells. The IgG1/IgG2a ratio in the sera and the ability of Env-E2 specific CD4 + T cells to produce IL-4 suggest that a Th2-type of immune response is induced in mice also after mucosal immunization. Th2-type CD4 Tcell polarization has been reported to favor mucosal IgA responses [33]. Indeed, we observed the presence of V3specific IgA in feces and anti-V3 IgG in sera of vaccinated mice following E2 mucosal administration. Since the contribution of monomeric IgA, essentially transudated serum IgA, to total IgA in fecal samples has been previously found negligible in comparison to the mucosally secreted dimeric IgA (generally less than 0.1 %) [34], we can assume that the IgA in feces from vaccinated mice were mainly of the secretory type. However, we cannot exclude the presence of monomeric IgA. In contrast to previous results obtained by subcutaneous administration [5,28], here, we observed the ability of antigen-specific CD4 + T cells to produce IFN-γ. 
These data indicate the presence of Th2 and Th1-polarized T cells following mucosal immunization in agreement with reports demonstrating a Th1 switch in mucosally primed mice [35]. However, from our data, we cannot exclude that the production of these two cytokines may depend on the same cells. Using an in vitro system, we measured the ability of E2 scaffold to induce gene expression changes in antigen Representative dot plot analysis of IFN-γ ICS from a single mouse in each group with the percentage values of IFN-γ positive cells indicated in the upper right corner of each square. Statistical significance was determined using the unpaired two-tailed Student's t-test; P value is reported. A representative experiment out of two is shown presenting cells. We profiled the entire transcriptome of BMDCs stimulated with the E2 scaffold, showing a significant up-regulation of genes associated with Th2polarizing DC capability [36]. Notably, we observed a significant up-regulation of the gene encoding the Notch ligand Jagged 1 (encoded by Jag1). Its expression in DCs has been previously described to support Notch-mediated Th2 differentiation [37]. In addition, we reported the upregulation of Stat5 and Nfkb1, whose expression is required within DC in promoting Th2 immunity [38,39], and of Irf4, known to regulate the transcription of key cytokines involved in the immune response [40]. Irf4 expression in DC has been very recently described to play a role in the initiation of the Th2 cell response [41,42]. Previous findings also revealed that Irf4 expression is significantly correlated with the expression of CD11b (encoded by Itgam), programmed death ligand-2 (encoded by Pdcd1lg2) and MHC class II major histocompatibility complex transactivator (encoded by Ciita) [31]. In agreement with this study, we report here the up-regulation of these genes in E2-stimulated DCs. The results obtained from the transcriptome analysis by RNA-Seq for E2-pulsed BMDCs confirm, and further strengthen, our previous observations and findings. Indeed, we have reported that subcutaneous vaccination of C57BL/ 6 mice with E2 polarizes the immune response towards a high IgG1/IgG2a isotype ratio and induces IL-4 producing CD4 + T cells, even in Th1-prone strains [28]. We also observed this type of polarization when subcutaneous administration of E2 particles was performed in the absence or in presence of various types of adjuvants, such as the strong reactogenic CFA (Freund's Complete Adjuvant) or in formulation suitable for human use, as alum (Allhidrogel) or squalen-based oil in water emulsion (Addavax) [43]. However, when we administered E2 with another carrier, namely the filamentous bacteriophage fd which we know to prompt Th1 polarization, in a priming-boosting strategy, we demonstrated a shift towards a mixed Th1/Th2 type of polarized response [28], indicating that it is also possible to modulate the Th2-type of the immune response induced by E2 administration. E2 particles free of endotoxin, administered subcutaneously in the absence of added adjuvants, are also immunogenic, although the addition of adjuvants increases the magnitude of the response [7]. However, in this case of mucosal administration we could not detect antigenspecific immune response in the absence of adjuvant administration. Indeed, RNA-Seq analysis indicates that Heatmap showing the fold change in gene expression for selected DE genes in E2-pulsed BMDCs vs untreated cells. 
Green asterisks include Th2 and MHC-II related genes; red asterisks include Th1-associated genes endotoxin-free E2 particles are able to activate in DCs pathways of immune response as the chemokine and the Jak-STAT signaling pathways. Moreover, the upregulated expression of Il12 and Il6 was found, while the expression levels of genes encoding other proinflammatory cytokines did not increase upon E2 stimulation. These levels of inflammatory response may explain why E2 particles free of endotoxin are immunogenic when administered subcutaneously in the absence of added adjuvants. However, it is not sufficient to prime the immune response in the absence of adjuvants in the case of mucosal delivery. We previously have reported that Env(V3)-E2 particles were much more immunogenic when co-administered with other systems such DNA expression vectors, encoding Env proteins, delivered intradermally (via Gene gun) in rabbits [7]. On the basis of these findings and of the known role played by Th2 cells in activating and enhancing B cell proliferation and humoral immune responses [44], in order to increase the mucosal immune response we plan, in future work, to test the mucosal delivery of E2 particles in combination with other strongly immunogenic delivery systems. Conclusions In the current preclinical study we showed that E2 scaffold is immunogenic when administered via mucosal route in the presence of adjuvant and thus may represent a relevant option for immunization against pathogens whose infections occur or initiate at a mucosal surface. Moreover, by using a genome-wide transcriptional approach, we attempted to dissect the mechanisms by which DCs may orchestrate the immune response following vaccination with the E2 scaffold.
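As a reproducibility note for the transcriptome analysis described in the Methods, the differential-expression filter (FDR < 0.05 and at least 1 FPKM in one condition) can be applied directly to Cuffdiff output. The original analysis used Cuffdiff and CummeRBund in R; the following is a minimal pandas sketch, and the file name, the column names (which follow the usual Cuffdiff gene_exp.diff layout), and the direction of the fold-change comparison are assumptions that should be checked against the actual Cuffdiff run.

```python
# Hedged sketch of the filtering step described in the Methods: keep genes with
# FDR (q_value) < 0.05 and FPKM >= 1 in at least one condition, starting from
# Cuffdiff's tab-delimited "gene_exp.diff" table (value_1/value_2 are the FPKM
# values of the two conditions; q_value is the BH-adjusted p value).
import pandas as pd

diff = pd.read_csv("gene_exp.diff", sep="\t")

FDR = 0.05
MIN_FPKM = 1.0

de = diff[
    (diff["status"] == "OK")
    & (diff["q_value"] < FDR)
    & ((diff["value_1"] >= MIN_FPKM) | (diff["value_2"] >= MIN_FPKM))
].copy()

# Sign convention depends on how the samples were ordered in the Cuffdiff call;
# here condition 1 is assumed to be untreated BMDCs and condition 2 E2-pulsed.
up_in_e2 = de[de["log2(fold_change)"] > 0]
down_in_e2 = de[de["log2(fold_change)"] < 0]

print(f"DE genes: {len(de)}, up-regulated after E2: {len(up_in_e2)}, "
      f"down-regulated: {len(down_in_e2)}")

# Gene lists can then be submitted to DAVID for the GO/KEGG enrichment step.
up_in_e2["gene"].to_csv("e2_upregulated_genes.txt", index=False, header=False)
```

Restricting the table to tests flagged "OK" by Cuffdiff excludes genes for which the dispersion model failed, a common pre-filter before enrichment analysis.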
v3-fos-license
2021-10-15T15:55:00.122Z
2022-01-01T00:00:00.000
241655113
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://www.techscience.com/iasc/v31n3/44836/pdf", "pdf_hash": "8ba46d859d51340b3054aad69a27f3b862495939", "pdf_src": "Adhoc", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41661", "s2fieldsofstudy": [ "Medicine" ], "sha1": "b60e5d4f1bd31398077caecef474eb0e4acdba05", "year": 2022 }
pes2o/s2orc
Lyapunov-Redesign and Sliding Mode Controller for Microprocessor Based Transfemoral Prosthesis Transfemoral prostheses have evolved from mechanical devices to microprocessor-based, electronically controlled knee joints, allowing amputees to regain control of their limbs. For an improved amputee experience at varying ambulation rates, these devices provide controlled damping throughout the swing and stance phases of the gait cycle. Commercially available microprocessor-based prosthetic knee (MPK) joints use linear controllers, heuristic-based methods, and finite state machine based algorithms to track the reference gait cycle. However, since the amputee experiences a variety of non-linearities during ambulation, such as uneven terrain, walking backwards, and climbing stairs, traditional controllers produce errors, abnormal movements, and unstable control, and require manual tuning. As a result, novel controllers capable of replicating and tracking reference gait cycles for a range of reference signals are needed to reduce the burden on amputees and improve the rehabilitation process. Therefore, the current study proposes two non-linear control techniques, the Lyapunov-redesign controller and the sliding mode controller, for real-time tracking of various signals, such as walking on level ground at normal speed and ambulation on uneven terrain. A state-space model of the MPK was developed, along with mathematical modelling of the non-linear controllers. Simulations and results are presented using MATLAB to verify the ability of the proposed non-linear controllers to constantly and dynamically track and maintain the desired motion dynamics. Furthermore, for selected reference signals, a linear controller was applied to the same mathematical model of the MPK. While tracking the reference angle for a general gait cycle, accuracies of 99.95% and 99.96% were achieved for the sliding mode controller and the Lyapunov-redesign controller, respectively, whereas for the same case the linear controller had an accuracy of only 95.5%. Therefore, it can be concluded that the performance of the non-linear controllers was better than that of their linear counterparts while tracking various reference signals for a microprocessor-based prosthetic knee. Introduction Microprocessor-based prosthetic knee (MPK) joints provide relief to patients suffering from the traumatic effects of amputation. MPKs provide better and more promising results than their mechanical counterparts, such as enhanced balance, less emphasis on walking mechanics, and less energy input from amputees in controlling these devices [1,2]. Motivation With the rapid advancement of above-knee prostheses, it is necessary to provide a cost-effective solution to reduce the high cost of MPKs [3,4]. Various control strategies, such as the linear quadratic regulator (LQR) [5], PID, and fuzzy logic control [6], have been endorsed in the literature for MPK joints spanning from the C-Leg [7] to the Rheo Knee [8], with each displaying some viable outcomes and a few limitations [3]. Commercial MPKs therefore employ finite state machine-based approaches, rather than state-of-the-art machine learning techniques, to control the knee joint.
According to the literature, these techniques increase the cognitive burden on the amputee and cause unnatural movements [9]. As a result, novel controllers capable of robustly replicating and tracking reference gait cycles are needed to reduce the burden on amputees, improve the rehabilitation process, and give them confidence. Related Work Alzaydi et al. [6] developed a fuzzy-based walking controller to track the behavior of healthy limbs during normal walking. Herr et al. [10] developed an MPK based on a magnetorheological damper (MRD) and designed an open-loop controller utilizing a finite state machine based technique to detect the various phases of the gait cycle and control the resistance of the knee joint. The authors in [11] developed an MRD-based prosthetic knee and proposed a combination of a computed control law and a PD control law to track the angle of the knee joint in real time. In [12], the authors proposed a running controller for an actively powered prosthesis to enable amputees to run with a bio-mechanically appropriate gait. Lawson et al. [13] proposed a piece-wise controller for detecting the various phases of the gait cycle to control the torque and damping of the knee joint. Park et al. [14] developed a knee prosthesis that can operate in active or semi-active modes, controlled through a polynomial prediction function. Jung et al. [15] proposed a semi-active MRD-based prosthetic knee joint controlled through a linear current controller. The authors in [16] noted that manual tuning of a continuous-phase controller is cumbersome and therefore developed an extremum seeking controller (ESC) capable of simultaneously tuning the feedback control gains of a powered knee prosthesis; various experiments were performed to verify the effectiveness of the proposed ESC across different walking speeds. Chang et al. [17] developed a data-driven model to track the lower-limb cadence trajectory and estimate the uncertainties by utilizing past input-output data. In [18], a model-free adaptive control method to estimate the angle of the knee joint for applications such as exoskeletons and MPKs was developed and simulated in MATLAB; the results indicated that the controller has acceptable accuracy while tracking the trajectory of the knee joint. The authors in [19] developed a model-free robust adaptive controller capable of handling known non-linearities in robotic applications; simulations suggested that the tracking performance of the proposed auto-tuned adaptive gain is better than that of manually tuned constant gains. Problem Statement The efficacy of linear controllers and data-driven models has been successfully demonstrated in the literature. However, because the amputee encounters various non-linearities during ambulation, such as irregular terrain and changes in loading conditions, linear controllers may result in errors and unstable control systems. This is due to the fact that linear controllers lack formal guarantees (when applied to non-linear systems) and require hand-tuning [20,21]. Contribution and Organization The main objective of this study was to develop robust non-linear controllers for MPKs to address the limitations of linear controllers. The proposed non-linear controllers not only aid amputees in level-ground walking but also allow for self-selected ambulation on uneven terrain. Furthermore, the proposed controllers adapt the amputee's walking pattern and gait cycle towards the ideal gait cycle.
The chosen MPK's physical model included intrinsic sensing capabilities for detecting the actual phase of the gait cycle [22]. A state-space model of the MPK was developed, and non-linear controllers were used to track the reference gait cycle. MATLAB simulations and results are presented to validate the ability of the proposed non-linear controllers to constantly track and maintain the desired motion dynamics. Furthermore, the results are compared with a linear controller for selected reference signals. The Lyapunov-redesign controller achieved accuracies of 99.96% and 98.90% while tracking the knee angle for a general gait cycle and a partial gait cycle on uneven terrain, whereas the SMC had accuracies of 99.95% and 98.70% while tracking the same reference signals. In contrast, a fine-tuned linear PID controller demonstrated accuracies of only 95.5% and 94%; therefore, it can be concluded that the non-linear controllers outperform linear PID while tracking reference signals for the MPK. The remainder of the article is organized as follows: Section 2 describes the state-space modelling of the system, which is followed by the design of the non-linear controllers in Section 3. Section 4 presents the findings and discussions, whereas Section 5 presents the conclusion and future considerations. State-Space Model of the System A schematic of the MPK, showing flexion/extension of the knee joint, is presented in Fig. 1. The angular displacement and angular velocity of the knee joint are important control parameters in MPKs, as tracking of the gait cycle depends on these two variables. The state-space model for the MPK presented in Fig. 1 is developed to apply and validate the non-linear control strategies. A state-space representation is a mathematical model of a physical system as a set of input, output, and state variables related by differential equations [23]. The proposed control strategy shown in Fig. 2 consists of a non-linear controller that takes the angular position and angular velocity of the knee joint as input, followed by a single-link manipulator based MPK as the dynamic model whose angular position and velocity are to be tracked according to the reference gait cycle. To demonstrate the efficacy and robustness of the proposed non-linear controllers to external disturbances, noise is also introduced as an input to the control mechanism. For developing a state-space model of the system, the following variables are assumed: x1 and x2 are the current angular position and angular velocity, whereas ref1 and ref2 are the desired angular position and angular velocity. Furthermore, e1 and e2 are the errors between the current and desired angular position and angular velocity, respectively. Hence, the dynamic model of the knee joint, as discussed in [24], can be written in state-space form; after simplification, the complete state-space model of the MPK is described by Eq. (7), where g, m, l, b, u, I, and φ(t) are the gravitational acceleration, mass of the joint, length from the knee joint to the foot, damping constant, input torque of the actuator, inertia of the knee joint, and external disturbances, respectively. Design and Stability Analysis of Non-Linear Controllers Based on the state-space model in Eq. (7), two different non-linear controllers, namely the Lyapunov-redesign controller and the SMC, were designed, and their stability analyses were performed to ensure that the MPK performs as designed, because deviation from the desired behavior may lead to gait instability and dissatisfaction of the amputee.
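Since the plant equations are not reproduced in this text, the following sketch assumes the standard single-link pendulum-with-damping model that is consistent with the parameters listed above (g, m, l, b, I, and the disturbance φ(t)); the numerical values are placeholders, not the paper's Tab. 1 entries.

```python
import numpy as np

# Placeholder parameter values (not the paper's Tab. 1 values).
m, l, b, I, g = 4.0, 0.4, 0.3, 0.35, 9.81

def knee_dynamics(x, u, phi=0.0):
    """Assumed single-link knee model:
    I*theta_ddot + b*theta_dot + m*g*l*sin(theta) = u + phi(t).
    State x = [theta, omega] (angular position and velocity)."""
    theta, omega = x
    omega_dot = (u + phi - b * omega - m * g * l * np.sin(theta)) / I
    return np.array([omega, omega_dot])

def step(x, u, dt=1e-3, phi=0.0):
    # Simple forward-Euler step of the state-space model.
    return x + dt * knee_dynamics(x, u, phi)

# Example: free response from a small initial flexion angle with zero torque.
x = np.array([0.2, 0.0])
for _ in range(1000):
    x = step(x, u=0.0)
print(x)
```

With the error states e1 and e2 defined as above (here taken as ref1 − x1 and ref2 − x2), this plant is the block that the non-linear controllers in the next section act on.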
The development of the Lyapunov-redesign controller for the MPK is discussed in the following section. Design of Lyapunov-Redesign Controller Lyapunov-redesign control is based on an energy function, and the stability of a given non-linear system is determined by the time rate of change of that energy function. [Figure 2: Control mechanism for the microprocessor-based prosthetic knee.] According to the Lyapunov stability criterion, if the rate of change of the energy function is zero or negative definite, the system is converging and stable [25]. From Eqs. (3) and (4), the error dynamics can be written (Eqs. (8) and (9)); from Lyapunov-redesign control theory, when the controller is stable the error converges, as discussed in [26], and this convergence can be formulated with K1 as the controller gain, whose value must be positive to keep the controller stable (Eq. (10)). Substituting the values of dx1/dt and dx2/dt into Eqs. (8) and (9) and simplifying, then comparing Eq. (10) with Eq. (12) and rearranging Eq. (13) to solve for u, gives Eq. (14), where u is the input torque (control input); with the help of Eq. (14), the variables of interest, namely the angular displacement and angular velocity, can be driven to their desired reference signals. Stability of Lyapunov-Redesign Controller Even if the system is efficiently controlled, unstable Lyapunov-redesign control can result in unbounded system output that does not provide the desired outputs. As a result, controller stability is as important as controller design [26]. The following steps can be used to test the Lyapunov-redesign controller's stability: suppose a semi-positive definite energy function; compute the time derivative of the energy function; if the rate of change of the energy function is negative or equal to zero, the controller is stable, whereas if it is positive, the system is deemed unstable [26]. To select an optimal candidate for the Lyapunov-redesign energy function in order to test the stability of the controller, a semi-positive definite energy function with the mass (m) and length (l) of the MPK as key parameters is assumed (Eq. (15)). Taking the time derivative of Eq. (15), dV1/dt can be simplified as Eq. (16); substituting Eq. (10) into Eq. (16) and simplifying yields Eq. (17). In Eq. (17), m, l, and K1 are always positive, and the square of e2 also results in a positive value. Thus dV1/dt ≤ 0; since the rate of change of the energy function is either zero or negative, the Lyapunov stability analysis shows that the proposed controller is stable. Design of Sliding Mode Controller The sliding mode controller (SMC) is a non-linear control method that modifies the dynamics of a non-linear system by applying a discontinuous control signal (or, more precisely, a set-valued control signal) that causes the system to slide along a cross-section (sliding surface) of the system's normal behavior [27]. SMC is well suited to non-linear systems containing disturbances and uncertainties, such as the MPK. SMC is a technique in which the control inputs alternate between two limits [27]. The desired closed-loop dynamics are obtained at s = 0, which is achieved through proper design of the controller. Let us assume a sliding surface (Eq. (18)), where a, s, and e2 are the controller coefficient, the sliding surface, and the angular velocity error, respectively. Taking the time derivative of Eq. (18) gives ds/dt, the rate of change of the sliding surface (Eq. (19)). By substituting the value of de2/dt from Eq. (10) in Eq.
(19), it may be written as Eq. (20). From the theory of SMC, when the system attains stability the error converges and the system slides on the assumed sliding surface. Thus, from the theory of SMC [27], the rate of change of the sliding surface can also be formulated as Eq. (21), where α and γ are the controller coefficients used to control the rate of convergence and chattering (disturbance) [27], and K2 is the controller gain, which must be positive to ensure the controller is stable. Furthermore, sgn is the signum function used to ensure ideal switching. Comparing Eq. (20) with Eq. (21) and solving for u yields the SMC control law. Stability of Sliding Mode Controller Similar to the Lyapunov-redesign controller, it is essential to examine the stability of the SMC. Let us assume a semi-positive definite energy function that includes the sliding surface parameter (Eq. (24)). Taking the time derivative of Eq. (24) gives Eq. (25); substituting Eq. (21) into Eq. (25) yields Eq. (26). By simplifying Eq. (26), dV2/dt can be written as Eq. (27), where |γ| = γ since γ > 0. Therefore, Eq. (27) can be simplified to Eq. (28). As K2 > 0, Eq. (28) results in a negative value, which shows that the developed SMC for the MPK is stable. Results and Discussion Simulations in MATLAB have validated that the developed Lyapunov-redesign and sliding mode controllers are capable of successfully tracking various reference signals in the MPK. Tab. 1 summarises key parameters of the MPK, such as the mass of the knee joint (m) and the length of the joint (l) [28]. Results for a sinusoidal signal, a normal walking gait cycle, and specific partial gait cycles have been plotted and compared with the linear controller to acquire a realistic view of the developed controllers. Sinusoidal Reference Signal The reference signal to be tracked for an MPK varies frequently with time. Therefore, to test the adaptability and robustness of the proposed controllers, the authors implemented them on a sinusoidal signal, which modifies its behavior at every instant. A hit-and-trial approach was used to fine-tune the parameters of the linear controller (PID), the Lyapunov-redesign controller, and the SMC, as described in Tab. 2, to follow the reference sinusoidal signals; the obtained results are plotted in Fig. 3. From Fig. 3a, it may be observed that the linear and non-linear controllers produced comparable results while following the angular position, as fine-tuning of the controller parameters was performed. Similarly, Fig. 3b shows that the linear controller produced slightly poorer results than the SMC and Lyapunov-redesign controllers for an angular velocity reference signal, due to the inherent system non-linearities. A comparison of the three controllers in terms of the error plots for angular position and angular velocity is presented in Figs. 3c and 3d, respectively, from which it can be seen that both non-linear controllers produced better results than the PID controller. General Gait Cycle After their validation on the sinusoidal signal, the Lyapunov-redesign controller and SMC were then implemented on an actual normal walking gait cycle. The reference angular position of the knee during normal walking presented in Fig. 4 is adopted from [29], whereas the reference angular velocity can be obtained by taking the time derivative of the reference angular position. The parameters tuned for the individual controllers in the previous section were used, as in real life it is not possible to recalibrate controller parameters for individual reference signals. From Fig.
4a, it can be concluded that both non-linear controllers achieved steady state in a shorter time and produced comparable results, while the linear controller performed relatively poorly because of ripples and a long transitional period. Additionally, the control input (torque) and the error between the desired (reference) and actual outputs have been plotted in Figs. 4b and 4c, respectively, which confirms that both non-linear controllers produce better results than the linear controller. The Lyapunov-redesign controller tracked the knee angle with an accuracy of 99.96%, whereas the SMC had a comparable accuracy of 99.95%. The linear controller was the least accurate, with only 95.50% accuracy. Partial Gait Cycles for Knee Extension/Flexion at Normal Walking Speed This subsection validates the non-linear controllers for the knee extension and flexion phases of walking at normal speed, with the reference signals for angular position and angular velocity adopted from [30,31]. Figs. 5a-5c show that the linear controller produced poor results while tracking the reference signal, with significant steady-state error, undershoot, and overshoot. The SMC and Lyapunov-redesign controllers, by contrast, stabilized relatively quickly and showed no large ripples over the simulation cycle, making them more suitable for both partial and complete gait cycles. Furthermore, Fig. 5d shows the error between the reference and actual outputs, indicating comparatively better results for both non-linear controllers. While tracking the angular position, the Lyapunov-redesign and sliding mode controllers achieved accuracies of 99.20% and 98.80%, respectively, whereas the linear controller was only 92.20% accurate. Partial Gait Cycle for Knee Extension/Flexion on Uneven Terrain The Lyapunov-redesign controller and SMC were also validated for knee flexion and extension on uneven terrain. The reference angular position and velocity were obtained from [30,31]; Fig. 6 shows and compares the results. The non-linear controllers gave good results over uneven terrain, as illustrated in Figs. 6a and 6b. Compared to both the SMC and the Lyapunov-redesign controller, the PID controller failed to precisely track the reference, resulting in a large steady-state error. The developed non-linear controllers reached steady-state values more quickly, whereas the PID controller lagged at first and exhibited wobbles and large ripples, showing that PID is inappropriate for uneven terrain, as discussed in the literature [28,29]. In addition, Figs. 6c and 6d show the control input (torque) and the error between the reference and actual output for easy comparison between the non-linear controllers and the PID controller. An accuracy of 98.90% was observed for the Lyapunov-redesign controller, whereas the SMC was 98.70% accurate. The PID controller achieved only 94% accuracy, confirming that it is inappropriate for this non-linear system. Conclusion Due to advancements in the field of bio-mechatronics, lower-limb prosthetics, particularly transfemoral prostheses, are progressing quickly. Microprocessor-based knee prostheses perform better and are more viable than mechanical knee prostheses, which provide little assistance to amputees. Several control techniques have been reported in the literature, including finite-state controllers, optimal controllers, and PID controllers. However, there are certain key shortcomings that need to be addressed.
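To make the reported comparison concrete, the following minimal closed-loop simulation sketches the two non-linear laws acting on the assumed single-link plant from the state-space section. The control laws are written in generic feedback-linearizing and boundary-layer SMC forms; the gains, reference signal, and accuracy definition are illustrative assumptions, not the paper's Tab. 2 values or its exact formulas.

```python
import numpy as np

# Illustrative plant and controller parameters (not the paper's values).
m, l, b, I, g = 4.0, 0.4, 0.3, 0.35, 9.81
k1, k2 = 120.0, 25.0           # Lyapunov-style error-feedback gains (guesses)
a, K2, eps = 15.0, 60.0, 0.05  # SMC surface slope, gain, boundary-layer width

def plant_accel(theta, omega, u):
    # Assumed single-link model: I*theta_ddot + b*omega + m*g*l*sin(theta) = u
    return (u - b * omega - m * g * l * np.sin(theta)) / I

def u_lyapunov(theta, omega, r, rd, rdd):
    # Feedback-linearizing law with a stabilizing error term, standing in
    # for the Lyapunov-redesign law derived in the paper.
    e, ed = r - theta, rd - omega
    return I * (rdd + k1 * e + k2 * ed) + b * omega + m * g * l * np.sin(theta)

def u_smc(theta, omega, r, rd, rdd):
    # Standard sliding mode law with a saturated (boundary-layer) switching term.
    e, ed = r - theta, rd - omega
    s = ed + a * e
    sat = np.clip(s / eps, -1.0, 1.0)
    return I * (rdd + a * ed + K2 * sat) + b * omega + m * g * l * np.sin(theta)

def simulate(controller, T=4.0, dt=1e-3):
    n = int(T / dt)
    t = np.arange(n) * dt
    w = 2 * np.pi * 0.5                      # 0.5 Hz knee-angle reference
    ref, refd, refdd = 0.6 * np.sin(w * t), 0.6 * w * np.cos(w * t), -0.6 * w**2 * np.sin(w * t)
    theta, omega, out = 0.0, 0.0, np.zeros(n)
    for i in range(n):
        u = controller(theta, omega, ref[i], refd[i], refdd[i])
        omega += dt * plant_accel(theta, omega, u)
        theta += dt * omega
        out[i] = theta
    # One plausible accuracy measure: 100 * (1 - RMS error / RMS reference).
    return 100.0 * (1.0 - np.sqrt(np.mean((out - ref) ** 2)) /
                           np.sqrt(np.mean(ref ** 2)))

print("Lyapunov-style accuracy: %.2f%%" % simulate(u_lyapunov))
print("SMC accuracy:            %.2f%%" % simulate(u_smc))
```

A PID baseline could be added to the same loop for comparison; with reasonable gains the two non-linear laws track the sinusoidal reference closely, mirroring the qualitative trend reported above.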
This study presents non-linear controllers, namely the Lyapunov-redesign controller and the sliding mode controller, for a single-link manipulator based MPK to address the limitations of linear controllers. The proposed non-linear control techniques have been validated for a variety of reference signals, including the general gait cycle, a sinusoidal reference signal, and partial gait cycles at natural speed in both the knee-flexion and extension phases. The results were plotted in MATLAB and compared to a linear PID controller, confirming that the non-linear controllers outperform PID control for all reference signals. Ultimately, for microprocessor-based prosthetic knee joints, non-linear controllers are the way forward. In addition, the present study provides a platform for implementation of non-linear controllers through hardware-in-the-loop techniques. Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
v3-fos-license
2023-02-08T06:17:50.752Z
2023-02-07T00:00:00.000
256628638
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GREEN", "oa_url": "https://www.researchsquare.com/article/rs-1858543/latest.pdf", "pdf_hash": "79facba1bf3062212d8b59d24c3ba606b2ffd8f4", "pdf_src": "MergedPDFExtraction", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41662", "s2fieldsofstudy": [ "Biology", "Psychology" ], "sha1": "2ee4b350643bcc40d8aa0b15d90ec932cfcebcce", "year": 2023 }
pes2o/s2orc
Disordered network structure and function in dystonia: Pathological connectivity vs. adaptive responses Primary dystonia is thought to emerge through abnormal functional relationships between basal ganglia and cerebellar motor circuits. These interactions may differ across disease subtypes and provide a novel biomarker for diagnosis and treatment. Using a network mapping algorithm based on resting-state functional MRI (rs-fMRI), a method that is readily implemented on conventional MRI scanners, we identified similar disease topographies in hereditary dystonia associated with the DYT1 or DYT6 mutations and in sporadic patients lacking these mutations. Both networks were characterized by contributions from the basal ganglia, cerebellum, thalamus, sensorimotor areas, as well as cortical association regions. Expression levels for the two networks were elevated in hereditary and sporadic dystonia, and in non-manifesting carriers of dystonia mutations. Nonetheless, the distribution of abnormal functional connections differed across groups, as did metrics of network organization and efficiency in key modules. Despite these differences, network expression correlated with dystonia motor ratings, significantly improving the accuracy of predictions based on thalamocortical tract integrity obtained with diffusion tensor MRI (DTI). Thus, in addition to providing unique information regarding the anatomy of abnormal brain circuits, rs-fMRI functional networks may provide a widely accessible method to help in the objective evaluation of new treatments for this disorder. Introduction Dystonia is a brain disorder characterized by sustained muscle contractions resulting in involuntary twisting movements and abnormal postures (1, 2). While frequently sporadic, primary dystonia has been identified with a variety of genetic mutations (3)(4)(5). Of these, the most common are the DYT1 (TOR1A) and DYT6 (THAP1) mutations, which are inherited as autosomal dominant traits with incomplete penetrance. Primary dystonia has often been viewed as a disorder of the basal ganglia, although considerable evidence points to widespread involvement of abnormal functional networks connecting these structures to the cerebellum, thalamus, and motor cortex (6)(7)(8)(9). Indeed, spatial covariance analysis of [ 18 F]fluorodeoxyglucose (FDG) PET scan data (10), which maps afferent synaptic activity in brain regions (11,12), has revealed a reproducible dystonia-related metabolic pattern involving the putamen, cerebellum, and supplementary motor area (SMA) in patients with inherited as well as sporadic forms of the disorder (13,14). A limitation in the wide application of PET-based network mapping approaches has been restricted availability and patient concerns regarding radiation doses. Recently, we developed a non-invasive method to identify and validate disease networks in resting-state functional MRI (rs-fMRI) scan data in conjunction with independent component analysis (ICA) and bootstrapping resampling (15)(16)(17). This method is available on standard clinical MRI commonly used for neuroradiological exams. The ability of rs-fMRI to detect disease networks has been validated against spatial covariance analysis with FDG PET (16,17), including the ability to partition the corresponding graphs into core and periphery zones (18), and to map node-to-node connections within and between modules (19,20). 
Indeed, this approach has been used to examine the effects of genotype, disease progression, and treatment on functional connectivity in the network space (19)(20)(21). Here, we used rs-fMRI to characterize an abnormal hereditary dystonia-related pattern (H-DytRP) in clinically manifesting (MAN) DYT1 or DYT6 patients. Given that these genes are only 30-50% penetrant (3-5, 13, 22), we additionally measured H-DytRP expression in clinically non-manifesting (NM) mutation carriers, sporadic dystonia (SPOR) patients, and healthy control (HC) subjects. Likewise, we identified an analogous sporadic dystonia-related pattern (S-DytRP) in rs-fMRI scans from SPOR and used the results to evaluate similarities and differences in the topographies of the two networks and in their respective relationships to dystonia severity ratings. Lastly, we used a graph theory-based approach to determine whether the internal structure of the disease network differed for the various groups. In a recent study of Parkinson's disease (PD) networks, we found that pathological and adaptive connectivity responses were associated with specific changes in graph metrics within key modules (20). By applying the same strategy to dystonia, we discerned analogous connectivity patterns in patients and clinically unaffected gene carriers. This distinction provides insight into the developmental mechanism that underlies dystonia, as well as the design of new treatment strategies for this and related disorders. Hereditary dystonia-related pattern (H-DytRP) To identify a significant disease network associated with hereditary dystonia, we analyzed rs-fMRI scans from manifesting DYT1/DYT6 mutation carriers (MAN) and age-matched healthy control (HC1) subjects using an ICA-based algorithm (see Methods). Patients and control subjects were distinguished by expression levels for three of the 40 ICs that were identifiable in
the data (Fig. 1A), corresponding to the cerebellum (IC18), the basal ganglia and thalamus (IC9), and sensorimotor cortex and occipital association regions (IC10). The estimated coefficients (weights) on the individual components (Fig. 1B) were used to combine the individual topographies into a composite disease network, which we termed the hereditary dystonia-related pattern (H-DytRP). The salient regions that comprise this network (Fig. 2A) are presented in Table 1. Expression levels for this network (Fig. 2B, left) were elevated in MAN patients (red bar) relative to the HC1 subjects (gray bar) used for network identification (p<0.001; permutation test, 5000 iterations). Prospectively computed network expression (Fig. 2B, right) was likewise increased in sporadic dystonia (SPOR) patients (black bar) relative to healthy control subjects (gray bar) (p=0.03; post-hoc test). H-DytRP expression was also increased in NM mutation carriers (blue bar) compared to healthy control values (p<0.001; post-hoc test). Significant differences in network expression were not observed across the MAN, NM, and SPOR groups (p=0.52; one-way ANOVA). Sporadic dystonia-related pattern (S-DytRP) We used an analogous strategy to identify a sporadic dystonia-related pattern (S-DytRP) in rs-fMRI scans from SPOR patients and HC1 subjects, but a reliable frequency histogram could not be identified for IC selection. Nonetheless, the same three ICs (IC18, IC9 and IC10) were also detected by the algorithm when it was applied to the combined MAN and SPOR patient group.
Indeed, a significant S-DytRP was identified based on the same ICs as those in the H-DytRP derivation, although the weights on two of these differed for the two dystonia networks (Fig. 1B and S3A). Specifically, the distribution of weights on IC18 (cerebellum) (left panels) was different for H-DytRP and S-DytRP (Jensen-Shannon divergence (JSD)=0.25), with a larger coefficient (represented by the mean value of the distribution; vertical dashed lines) on the former network. The weight distributions on IC10 (SMC/OCC) (right panels) also differed (JSD=0.26), but with a larger coefficient on the latter network. By contrast, the weight distributions on IC9 (basal ganglia/thalamus) (middle panels) were similar for the two networks (JSD=0.02), with nearly identical coefficients. Thus, using the same ICs as H-DytRP but with revised coefficients, we identified a candidate S-DytRP that accurately discriminated SPOR from HC1 (p<0.005; permutation test, 5000 iterations). As expected, voxel weights on the two dystonia networks were strongly correlated (r=0.89, p<0.001; voxel-wise correlation, corrected for spatial autocorrelation), as were corresponding expression levels across the study population (r=0.91, p<0.0001; Fig. 2C). Moreover, as with H-DytRP, S-DytRP expression (Fig. S3B) was elevated in MAN, NM, and SPOR compared to healthy control values (p<0.015, post-hoc tests). Clinical correlates of network expression H-DytRP expression in MAN and SPOR patients (n=20) correlated with Burke-Fahn-Marsden Dystonia Rating Scale (BFMDRS) motor ratings obtained at the time of imaging (R²=0.36, p=0.005; linear regression). Diffusion tensor imaging (DTI) measurements of fractional anisotropy (FA) in a prespecified subrolandic white matter (SWM) volume also correlated with motor ratings (R²=0.29, p=0.015) in this group. Indeed, we found that a 2-variable model based on H-DytRP expression and SWM FA in combination predicted motor ratings more accurately (R²=0.62, p=0.0003) and with greater generalizability than 1-variable models based on each predictor alone (AIC=109 for the 2-predictor model vs. 117 and 120 for 1-predictor models based on H-DytRP expression or SWM FA, respectively). These relationships are depicted graphically for each predictor (Fig. 3A, B), and for both together (Fig. 3C), as partial correlation leverage plots (see Methods). Of note, similar results were also obtained using expression values for S-DytRP, rather than H-DytRP, as the network predictor (Fig. S3C). Changes in functional connectivity We next identified the functional connections gained in the network space for each dystonia group relative to the healthy subjects (Tables S2-S4). Given the similarity of the H-DytRP and S-DytRP topographies shown above, connectional analysis focused on the former network. To evaluate the organizational structure of the H-DytRP, we calculated the gain and loss of functional connections linking its component nodes. Gain: The gain in connections in the H-DytRP space (Fig. 4A, left panel) was similar for the MAN, NM, and SPOR groups (F(2,297)=0.34, p=0.71; one-way ANOVA). To delineate group differences in connectional gain within specific network subspaces, we partitioned the H-DytRP into core and periphery subgraphs using a modularity maximization algorithm (Fig. 4B; see Methods).
The core included the cerebellum, thalamus, putamen, and the left precentral and perirolandic areas, while the periphery was composed mainly of prefrontal, superior temporal, and parietal cortical regions. We found that the distribution of gained connections in the core and periphery (Fig. 4A, middle panel) differed for the MAN, NM, and SPOR groups (F(2,594)=194, p<0.0001; group × subgraph interaction effect). Indeed, group differences in connectional gain were more pronounced within the core compared to the periphery. In this subgraph (black outline), more connections were gained in MAN compared to either SPOR or NM (p<0.0001; post-hoc tests). We note that in MAN, NM, and SPOR, connectional gain was greater outside compared to inside the core (Fig. 4A, right panel). This tendency differed significantly across the groups (F(2,297)=137, p<0.0001; one-way ANOVA), with greater relative gain outside the core in SPOR and NM compared to MAN (p<0.0001; post-hoc tests). The individual connections gained in the MAN, NM, and SPOR groups are summarized in Tables S2A, S3A, and S4A. MAN and SPOR, the two clinically affected groups, displayed abnormal core-core connections linking the right thalamus and putamen, as well as core-periphery connections linking the left middle frontal gyrus and precentral cortex, and the right Rolandic operculum and angular gyrus. (In the left hemisphere, abnormal connections linking the latter two regions were gained in MAN but not SPOR.) By contrast, MAN and NM, the two gene-positive groups, exhibited abnormal gain in core-core connections linking the Rolandic operculum and the putamen bilaterally, and in core-periphery connections linking the left middle and inferior opercular frontal regions. Notably, gain in connections linking nodes in the cerebellar vermis and lateral hemisphere was present in all three groups. That said, gain in connections linking the thalamus to other H-DytRP core nodes, the putamen and Rolandic operculum in particular, was seen in MAN and SPOR patients, but not in NM gene carriers. Loss: Loss of normal H-DytRP connections was also noted (Fig. 5A, left panel), which was most pronounced in NM carriers compared to MAN or SPOR patients (p<0.001; post-hoc tests). Connectional loss in the core and periphery of the network (Fig. 5A, middle panel) differed across the groups (F(2,594)=142, p<0.0001; group × subgraph interaction effect). In contrast to connectional gain in the core (black outline), which was most pronounced in MAN, loss of connections in this module was greater in NM (p<0.001; post-hoc tests). Indeed, the NM group exhibited relatively greater connectional gain outside the core, but greater loss inside the core compared to the other groups (Fig. 5A, right panel). The normal H-DytRP connections that were lost in the MAN, NM, and SPOR groups are summarized in Tables S2B, S3B, and S4B. In MAN, loss of core-core connections was limited to those linking the left pallidum to the putamen. By contrast, loss of core-periphery connections involved those linking the left pons and cerebellar vermis, the vermis and right middle frontal gyrus, and the angular and lingual gyri, while cortico-cortical connections in the periphery were also lost in this group. In NM, by contrast, the loss of normal core-core connections was more extensive than in the other groups.
In particular, connections normally linking the left thalamus and putamen were not present in NM, as were those linking the vermis and right putamen, the left pallidum and thalamus, and the left motor cortex and globus pallidus. A variety of core-periphery connections, including those linking the cerebellum with the cortical association regions, were also lost in NM, as well as cortico-cortical connections in the periphery. Finally, in SPOR, connectional loss was similar to that observed in MAN, with limited involvement of core-core connections such as those linking the cerebellar vermis to the right putamen, and left motor cortical regions and the globus pallidus. More extensive loss was noted for core-periphery connections such as those linking the cerebellar vermis with the inferior frontal operculum, and the right thalamus with frontal, parietal, and occipital association regions, as well as for cortico-cortical connections within the periphery. In aggregate, connectional gain in the core was greatest in MAN and least in NM and SPOR. In the same subgraph, however, loss was greatest in NM but only minimal in MAN and SPOR. Altered graph metrics in the H-DytRP core zone To determine the effects of the connectivity changes on network function in dystonia, we quantified graph metrics in specific H-DytRP subgraphs. Given that group differences were greatest in the core, further analysis focused on this subgraph. The results for each of the metrics are summarized in Table 2. Core degree centrality (Fig. 6A), a measure of overall connectivity in the subgraph, was greater in MAN compared to NM, SPOR, and HC1 (Pcorr<0.001). Likewise, assortativity measured in the same subgraph (Fig. 6B) was increased in MAN compared to NM (Pcorr<0.001) and HC1 (Pcorr<0.05), but was reduced in NM compared to control (Pcorr<0.05). Thus, while core degree centrality was similar for NM and HC1, assortativity was lower in the former group relative to the latter. Regarding the other core metrics, clustering (Fig. 6C) was reduced in MAN relative to HC1 and NM (Pcorr<0.001), whereas characteristic path length (Fig. 6D) was increased in MAN relative to the other groups (Pcorr<0.001). NM, by contrast, exhibited increased clustering (Pcorr<0.001) without significant difference in characteristic path length in comparisons with HC1. These changes are consistent with the significant reduction in small-worldness seen in the core (Fig. 6E) for MAN compared to NM and HC1 (Pcorr<0.001) and the modest increase in the core (Pcorr<0.05) that was present in NM relative to HC1. Graph analysis of the H-DytRP core additionally revealed the presence of a "rich club," i.e., a set of highly interconnected central nodes, within this subgraph, but only in the MAN group (Fig. 6F) and not in the NM, SPOR, or HC1 groups (Pcorr<0.001). The rich-club coefficient (see Methods) was normal in the NM and SPOR groups. Reconstructions of the H-DytRP core zone (see Methods) in MAN (Fig. 7A) revealed an abundance of connections linking high degree nodes (yellow-yellow) at the center. The gain in connections in this subgraph (red lines) was largely, but not exclusively, assortative, as evidenced by those linking nodes of low degree (blue-blue) as well as high degree (yellow-yellow). In NM (Fig. 7B), by contrast, several disassortative connections were observed in the core linking high and low degree nodes (yellow-blue); however, links between nodes of similar degree were also observed in this group.
Graph visualization did not reveal prominent connectivity differences in the core subgraph of SPOR patients (Fig. 7C); connectional gain was modest in this group as it was in NM. Subgraph reconstruction of the H-DytRP core in HC1 subjects is displayed for reference (Fig. 7D). Discussion In this study, we used a novel machine learning approach to identify dystonia-related functional networks in rs-fMRI data, characterized by contributions from the basal ganglia, cerebellum, thalamus and sensorimotor cortex, as well as frontal and parieto-occipital association regions, which is in agreement with previously reported network abnormalities in dystonia (23)(24)(25). The findings demonstrated striking similarity of disease networks derived from hereditary vs. sporadic dystonia patients, supporting a common nosology. Moreover, given the significant relationship between functional network expression in individual patients and dystonia motor ratings, the rs-fMRI-based disease topography may have utility as an objective imaging biomarker in clinical trials. This approach also allowed for a detailed analysis of the connections (edges) linking network regions (nodes) within and between key modules in the various groups. In addition to identifying commonalities and differences between groups in individual connections, we found distinctive changes in network organization that were consistent with a neurodevelopmental mechanism for the disorder. The results also suggest the possibility of new treatment targets for patients with this disorder. Gain and loss of functional connections in the H-DytRP network As part of the study, we mapped individual functional connections that linked H-DytRP nodes in the MAN, NM, and SPOR groups but not in HC. Abnormal gain in connections linking the cerebellar vermis and hemispheres was observed in all three groups, supporting a major role for this structure in dystonia (7,9,13,26). Certain connections were gained in dystonia gene carriers irrespective of penetrance, i.e., in both MAN and NM, such as those linking the Rolandic operculum with the putamen, and the inferior frontal operculum with the middle frontal gyrus. These abnormal connections were not detected in SPOR, suggesting that dystonic movements are mediated by other pathways in the latter group. That said, other functional connections were present in both MAN and SPOR but not in NM or HC, such as those linking the centromedian thalamus with the putamen, the Rolandic operculum with the angular gyrus, and the middle frontal gyrus with the precentral cortex. Of note, connectivity changes involving these regions have been reported previously in fMRI studies of focal dystonia (27,28). The delineation of abnormal functional connections such as these may help customize targets for therapeutic intervention in individual patients. Indeed, a recent deep brain stimulation (DBS) study found that cervical and generalized dystonia patients benefited clinically from distinct DBS stimulation pathways (25). The study of H-DytRP connectivity in dystonia revealed parallels with recent data on the influence of genotype on the organization of Parkinson's disease networks. Specifically, in both disorders, these effects were most pronounced in core subgraphs defined independently through community detection algorithms (19). Indeed, in MAN, prominent connectional gain was evident within the H-DytRP core, whereas in the other groups, this effect was greater outside this subgraph.
The shift of the abnormal connections from the core in MAN to the periphery in SPOR may be clinically relevant. The H-DytRP periphery is composed largely of frontal, parietal, and occipital association regions, and the current data suggest a preponderance of such connections in SPOR patients. In hereditary dystonia, by contrast, connectional gain was more prominent in the core, linking cerebellar, thalamic, striatal, and motor cortical nodes within this module. As noted above, significant group differences in connectional loss were also observed in the core, but these were most pronounced in NM carriers as opposed to MAN and SPOR patients. The meaning of the loss of normal connections in dystonia is less clear. It is tempting to associate reductions in functional connectivity as seen in the core zone of NM with microstructural changes in the integrity of thalamic outflow pathways to the motor cortex and striatum (22,29). The prominent loss of functional connections linking these core regions is therefore consistent with earlier DTI studies conducted in DYT1/DYT6 carriers which demonstrated a close relationship (13,30). Of note, prominent loss of functional connectivity was also present in MAN and SPOR, but unlike NM, these changes were predominantly observed outside the core zone. For example, functional connections in the H-DytRP periphery linked the inferior frontal operculum (BA 44) and the angular gyrus (BA 39) in healthy subjects but were not present in either of the two affected groups. Given that activity in these regions was anticorrelated in healthy individuals, it is possible that this projection normally has an inhibitory function that is lost in dystonia patients. Even so, one cannot determine from the rs-fMRI data alone whether the underlying anatomical connections are intact but functionally deactivated through interactions with other involved nodes or subnetworks, or alternatively whether the two regions are anatomically disconnected on a neurodevelopmental basis. DTI tractography can help disambiguate these possibilities. In this case, we observed a substantial reduction in the number of fiber tracts connecting these regions in MAN compared to HC subjects (31). It is possible that similar changes underlie deficits in higher order functions such as sequence learning and visual motion perception reported in dystonia patients (31)(32)(33). Further multimodal imaging studies with rs-fMRI and DTI will be needed to evaluate these and other non-motor manifestations of the disorder. Pathological and adaptive network configurations in dystonia The changes in functional connectivity seen in dystonia were not limited to preferential gain and loss of links in specific H-DytRP subspaces. However, by quantifying a small number of prespecified graph metrics, we obtained valuable information on the connectivity patterns that characterized the various groups. As with gain and loss, the changes in network architecture denoted by the metrics were most pronounced in the core zone. In MAN, degree centrality and characteristic path length were significantly increased in this module, which was consistent with the substantial gain in connections that was observed. That said, the clustering coefficient, an index of parallel processing, was reduced in the core zone of MAN patients.
By the same token, core small-worldness was also reduced in this group, denoting an imbalance between segregation and integration of neural signals and inefficient information transfer through this module. In addition, the core zone in MAN exhibited high assortativity, i.e., the tendency for connections to form between nodes of similar degree centrality (34)(35)(36). This gives rise to relatively homogeneous nodal interactions, a feature of unstable, pathological connectivity responses in disease networks (20). The H-DytRP core also exhibited a rich club, i.e., a set of mutually interconnected high degree nodes, in MAN but not in the other groups. Rich clubs are an important means of facilitating functional integration in normal brain networks (37)(38)(39)(40). They may, however, also signify subspaces with enhanced vulnerability to pathology in neurodegenerative disorders (41,42). The current data suggest that rich club organization may also play a pathological role in neurodevelopmental disorders such as hereditary dystonia. The pattern of functional connectivity recorded in the H-DytRP core was substantially different in NM carriers. In contrast to MAN, core degree centrality and characteristic path length were normal in this group, while the clustering coefficient and small-worldness were both increased in this subgraph. These findings, along with reduced assortativity, i.e., heterogeneity of nodal interactions, are consistent with enhanced adaptive capacity of a network in response to perturbation (43,44). In the context of disease networks, this connectivity pattern suggests a beneficial adaptation, analogous to that observed in PD patients with the slowly progressive LRRK2-G2019S genotype or sporadic PD patients following network rewiring as a consequence of subthalamic gene therapy (19)(20)(21). The distinctive network configurations observed in the H-DytRP core in MAN and NM can be further considered in the context of self-organized criticality, the process in which network performance is optimized to a critical point or a range of configurations (45,46). Self-organized criticality is inherently non-linear (47): network behavior is chaotic and unstable for configurations sampled above the critical point (supercritical range). Below this point (subcritical range), however, the network behavior is quiescent, but performance is suboptimal. Between these extremes lies a narrow range of configurations (critical region), in which network performance is optimized for stable and efficient information processing (48). Recent studies of experimental systems suggest that networks can assume different configurations as criticality develops. Indeed, in dissociated cortical cultures from the early developmental period, self-organized criticality was reached after a stereotyped sequence of maturational steps (46,49,50). According to this model, network development begins in a quiescent state in the subcritical regime. Subsequently, dysregulated supercritical behavior emerges as increasing numbers of connections are formed that give rise to homogeneous nodal interactions. A critical point is reached later, however, after a period of synaptic pruning in which optimal network efficiency is achieved by removing excess connections. One might speculate that this process is incomplete in DYT1/DYT6 carriers: network maturation is arrested in the initial quiescent phase in NM, and in the later uncontrolled phase in MAN.
Given that the process of self-organization is itself non-linear (47), transitions from one network configuration to another are unlikely to occur spontaneously, particularly after the system has matured to a certain level. That said, arrest of network development in the supercritical phase, as proposed for MAN, may involve additional features associated with the disease process. For example, the appearance of a set of tightly interconnected high degree nodes in the core, referred to as a "rich core" (39), may enhance vulnerability to extrinsic insults occurring later in brain development (51). Self-organized criticality is also influenced by factors distinct from network mesostructure, such as GABAergic inhibition and plasticity of connections in key modules (46,52,53). The role of each of these variables in an individual patient may depend on genotype as well as environmental factors. Hereditary and sporadic dystonia networks as disease markers The connectivity changes in the H-DytRP space seen in MAN and NM were not necessarily generalizable to SPOR. Firstly, while the H-DytRP and S-DytRP topographies are similar, they are by no means identical. The same ICs were selected by the algorithm when it was applied to the MAN patients or to the combined MAN and SPOR sample, suggesting that the two networks share a common topographic signature. Even so, contributions from the cerebellum (IC18) were accentuated in the former network and relatively reduced in the latter, while those from sensorimotor and occipital association cortex (IC10) were in the opposite direction. It is likely that the SPOR group, which included patients with focal, segmental, and generalized dystonia, was too heterogeneous for reliable IC selection by our algorithm. It is possible that an S-DytRP identified in a larger, more clinically homogeneous SPOR sample would exhibit a somewhat different topography than currently observed. By the same token, the mesostructure of the H-DytRP and S-DytRP also cannot be viewed as equivalent, in that the nodes that comprise the core and periphery may differ for the two networks. Indeed, in SPOR, the absence of abnormalities in some graph metrics may be explained by a shift in the boundaries of the core zone for the two dystonia networks. Despite these limitations, generating the S-DytRP from the ICs identified in the MAN data proved to be a reasonable strategy, given the close relationship of H-DytRP and S-DytRP expression values measured in the two patient groups. Importantly, the presence or absence of motor manifestations in DYT1/DYT6 carriers was not explained by differences in H-DytRP or S-DytRP expression. That said, functional connections linking the thalamus and Rolandic gyrus were seen in MAN, but not in NM. This observation mirrors the earlier DTI studies, which point to penetrance-related differences in the microstructure of the cerebellothalamocortical (CbTC) motor pathway in dystonia gene carriers (22,30). Whereas cerebellothalamic fiber tracts, which constitute the proximal segment of the pathway, were reduced in both MAN and NM, thalamocortical projections along the distal segment were reduced only in the latter group. These data accord with the current functional connectivity studies, suggesting that transmission of aberrant cerebellar output to the motor cortex, by way of relatively intact thalamocortical pathways, is essential for clinical penetrance in this population.
Twenty healthy volunteers (9 M/11 F, age 47.6 ± 7.8 years) served as controls. The healthy control (HC) subjects were divided into two groups of 10 (HC1 and HC2) based on age and gender. The HC1 subjects (4 M/6 F, age 46.7 ± 9.0 years) had similar demographics to the dystonia patients; their scans were used in the network identification procedure (see below). Scans from the HC2 subjects (5 M/5 F, age 48.5 ± 6.6 years) were used for testing. The clinical and demographic features of these patient and control groups are presented in Table S1. All subjects underwent T1-weighted imaging, diffusion tensor imaging (DTI), and rs-fMRI. Ethical permission for these studies was obtained from the Institutional Review Board of Northwell Health. Written consent was obtained from each subject following detailed explanation of the procedures.
Resting-state functional magnetic resonance imaging
Subjects were scanned in an awake resting state; no specific task was performed in this condition. Subjects receiving botulinum toxin treatment were scanned 3-4 months after the last injection (55). The preprocessing included motion correction, brain extraction, spatial smoothing (kernel=8 mm; full width at half maximum), and temporal high-pass filtering (cutoff frequency=1/150 Hz). To evaluate the effects of motion during rs-fMRI, we examined absolute and relative displacements during the scan. There was less than 1.4 mm absolute displacement and less than 0.35 mm relative displacement in all scans, without significant differences between groups (absolute: p>0.50; relative: p>0.43; Student's t-tests) (Table S1). The resulting fMRI volumes were registered to the individual subject's structural T1 image and then to the standard Montreal Neurological Institute (MNI) 152 template. The rs-fMRI data were intensity normalized to reduce variability and improve the reliability of ICA maps.
Identification and validation of dystonia-related patterns
Two approaches were employed to identify and validate dystonia-related networks in the rs-fMRI data. In the first, we identified a hereditary dystonia-related pattern (H-DytRP) in scans from manifesting (MAN) dystonia gene carriers and age/gender-matched healthy control subjects (HC1). We then prospectively measured individual H-DytRP expression levels (subject scores) in the sporadic dystonia (SPOR) patients, the non-manifesting (NM) mutation carriers, and in the remaining healthy subjects (HC2). In the second, we identified a sporadic dystonia-related pattern (S-DytRP) in scans from SPOR and HC1 subjects, and prospectively measured expression levels for this network in the MAN, NM, and HC2 groups. The specific computational steps employed for network identification and validation are outlined in Fig. S1 and presented in detail elsewhere (15,17).
Step 1: The rs-fMRI data from all 50 participants were analyzed using group-wise spatial ICA (56) with GIFT software (http://mialab.mrn.org/); 50 group independent components (ICs), and associated time courses and power spectra, were obtained. Of these, 10 ICs were discarded because of motion artifacts and/or scanner-related or physiological noise on visual inspection. The procedures for network identification and testing that follow were applied to the remaining 40 ICs.
Step 2: Subject spatial maps and temporal dynamics were estimated using dual regression (57).
Step 3: Subject scores, representing pattern expressions in individual rs-fMRI scans, were computed for the 40 ICs in each of the patients and control subjects.
The subject score of component k in subject j (denoted z_k(j)) is the scalar projection of the group spatial map for component k onto the subject spatial map for component k in subject j.
Step 4: The resulting measures for the training dataset were entered into a logistic regression model with bootstrap resampling (1000 iterations) to identify a subset of ICs whose subject scores best discriminated between the two groups. The weight (coefficient) for each of the selected ICs was estimated through an additional bootstrap resampling procedure (1000 iterations) (15).
Step 5: Using the coefficients estimated in Step 4, we linearly combined the selected components to form a composite disease network associated with hereditary or sporadic dystonia, i.e., H-DytRP or S-DytRP. The same coefficients were applied to the corresponding subject scores to quantify expression levels for the respective networks.
Steps 1-5 were performed for network identification. The derived H-DytRP and S-DytRP topographies were compared by computing voxel-wise correlation coefficients.
Clinical correlations of network expression
To assess the relationship of dystonia network expression and the severity of clinical manifestations of the disorder, we separately correlated H-DytRP and S-DytRP subject scores with BFMDRS motor ratings in the combined MAN and SPOR patient sample. This was done by computing Pearson product-moment correlation coefficients, which were considered significant for p<0.05. In an earlier study using magnetic resonance DTI in primary dystonia, we found that BFMDRS motor ratings correlated with the microstructural integrity of the distal, i.e., thalamocortical, segment of the CbTC motor pathway (30). FA was measured in a fixed subrolandic white matter (SWM) volume in gene carriers and HC subjects (Fig. S2) (22,30). This volume was defined by significant bilateral FA reductions in NM compared to either MAN or HC (22). Given that SWM FA was also found to correlate with BFMDRS motor ratings (30), we entered these values, averaged across hemispheres, into linear regression models as an additional predictor of clinical severity. The best quality model, i.e., that with the greatest generalizability as defined by the lowest out-of-sample prediction error, was selected based upon the Akaike Information Criterion (AIC) (61). The relationship between each predictor and motor ratings, independent of variation in the other predictor, was displayed using partial correlation leverage plots (Statistics and Machine Learning Toolbox, MATLAB R2020a); these relationships were considered significant for p<0.05.
Connectivity maps in the network space
To document the abnormal node-to-node interactions that underlie the dystonia-related functional topographies, we normalized the 3D voxel-wise network displays to MNI space and thresholded the images at T=4.8 (p<0.001). Using the AAL atlas (62), we parcellated the brain into 95 regions-of-interest (ROIs) and identified those corresponding spatially to significant network clusters (18)(19)(20)(21). For each node, we computed the mean time series from the rs-fMRI scans in each group, and correlation coefficients were computed between the time series of each pair of nodes. By this scheme, the magnitude of the correlation (|r|) provided a measure of connectivity between network nodes for each group. For a given pair of nodes, group differences in connectivity were described by the absolute difference (|dr|) in the two correlation coefficients (19,21).
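As a toy illustration of the subject-score and network-expression arithmetic in Steps 3-5, the short Python sketch below projects hypothetical group IC maps onto subject maps and combines selected components with illustrative weights. It is a minimal sketch of the computation only, not the GIFT/dual-regression pipeline used in the study; all array sizes, component indices, and coefficients are invented.

```python
import numpy as np

def subject_score(group_map, subject_map):
    """Scalar projection of the group spatial map for one component onto the
    corresponding subject spatial map (both as 1-D voxel vectors), following
    the phrasing in the text."""
    return float(np.dot(group_map, subject_map) / np.linalg.norm(subject_map))

def network_expression(group_maps, subject_maps, weights, selected):
    """Composite DytRP expression for one subject: weighted sum of the subject
    scores of the ICs selected by the logistic-regression step."""
    return sum(weights[k] * subject_score(group_maps[k], subject_maps[k])
               for k in selected)

# Hypothetical example: 40 ICs, 1000 voxels each.
rng = np.random.default_rng(0)
group_maps = rng.standard_normal((40, 1000))    # group-level IC maps
subject_maps = rng.standard_normal((40, 1000))  # dual-regression maps, one subject
weights = {3: 0.7, 12: -0.4, 27: 0.9}           # illustrative coefficients only
print(network_expression(group_maps, subject_maps, weights, selected=list(weights)))
```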
For the gain of a connection in a dystonia group (i.e., MAN or SPOR) or mutation carriers (i.e., NM) to be significant, we required that the magnitude of the correlation coefficient (|r|) that defined the graphical edge be greater than or equal to 0.6 (p<0.05; Pearson correlation) in that group but not in HC, and that the corresponding absolute difference (|dr|) from HC be greater than 0.4 (p<0.05; permutation test, 1000 iterations). The latter threshold was determined using the HC graph and permuting the regional labels 1000 times to create a set of pseudorandom correlations for each iteration, as described previously (19,21). Likewise, the loss of a normal connection in MAN, NM, and/or SPOR was significant if |r| for the graphical edge was greater than or equal to 0.6 in HC but not in the particular disease group, and the corresponding absolute difference from HC was greater than 0.4. Connections that satisfied these criteria were confirmed by bootstrap resampling (100 iterations) using the Statistics and Machine Learning Toolbox in MATLAB R2020a. The resulting connections were considered for further analysis only if: (1) they also conformed to known anatomical pathways determined by diffusion tractography and/or postmortem analysis, and (2) the two nodes were separated by no more than two sequential hops along the graph. Additionally, connections were mapped within and between nodal communities defined according to a graph partitioning algorithm (19). This was done based on a modularity maximization algorithm as described elsewhere (63)(64)(65), utilizing the "core_periphery_dir" function in the Brain Connectivity Toolbox (66). According to this method, the optimal core/periphery subdivision is a partition of the network into two non-overlapping zones, such that the number/weight of within-core edges is maximized while the number/weight of within-periphery edges is minimized (63,67,68). In this study, we used this approach to partition the dystonia network into core and periphery subgraphs for further analysis. Specifically, we counted the total number of connections gained relative to HC in the network space for the MAN, NM, and SPOR groups. Network connections were further classified according to edge type: core-core, periphery-periphery, or core-periphery. Bootstrap resampling (100 iterations) was used to estimate the mean and standard error of this measure for each group and edge category.
Network metrics
To evaluate connectivity patterns in network subspaces, we computed the following graph metrics for each group on weighted undirected graphical links, as described elsewhere (19-21,69):
1. Degree centrality: the number of connections (edges) within the network divided by the total number of nodes in the same space. This is a measure of overall connectivity of nodes within a given network or subgraph.
2. Normalized clustering coefficient: the likelihood that the nearest neighbors of a node will also be connected. This measure provides an index of parallel processing among locally interconnected nodes (closed triples) in a network or subgraph.
3. Normalized characteristic path length: the shortest path length between two nodes averaged over all pairs of nodes in a given network (34,66). This measure was also normalized to the corresponding value from an equivalent random graph.
4. Small-worldness: the ratio of clustering coefficient to characteristic path length, normalized to the corresponding value from an equivalent random graph (70).
This measure quantifies the ratio of segregation to integration of information sources in the network space.
5. Assortativity coefficient: the correlation coefficient between the degrees of all nodes on two opposite ends of a link (35,36,71). For a given network, the coefficient is described as assortative for positive values, neutral for values ≈ 0, and disassortative for negative values.
6. Rich-club coefficient: the fraction of edges that connect a set of nodes of degree k or greater, relative to the maximum number of edges possible in the subnetwork (72). The measure assays the ability of the "rich club" nodes to influence global network function by virtue of their dense interconnections (38,39,73).
These parameters were computed using the Brain Connectivity Toolbox (66) and an in-house MATLAB script (MATLAB R2020a). We note that group differences may be difficult to discern because of the inclusion of random, non-specific links at low thresholds (r<0.3; graph density >60%), and graph disconnection at high thresholds (r>0.6; graph density <25%) (19)(20)(21). Network metrics are therefore presented for a range of connectivity thresholds (r=0.3 to 0.6, at 0.05 increments) corresponding to graph densities between 25% and 60%. By plotting the results over the range, we show that group differences for a given metric are valid over multiple adjacent levels. To visualize group differences in connectivity patterns, relevant network subspaces were displayed at the minimum threshold (Level 1, r=0.30). For specific metrics such as the assortativity coefficient, exemplars in the range of the mean value ±0.5 SD were selected from the bootstrap samples described above. Individual graph configurations were represented as force-directed displays (74) using an in-house MATLAB script (mathematics/graph and network algorithms toolbox, MATLAB R2020a).
Statistical analysis
Expression values for H-DytRP and S-DytRP were computed in the MAN, NM, SPOR, and HC groups and compared using ANOVA with the Tukey-Kramer HSD adjustment for multiple comparisons. The gain and loss in H-DytRP connections in MAN, NM, and SPOR (i.e., those present in these groups but not in HC (gain), or those present in the HC group but not in these groups (loss)) were computed as a percentage of the total within and between core and periphery subgraphs, as well as across the whole network. The difference between the connections gained and lost inside (core-core) versus outside (core-periphery and periphery-periphery) the H-DytRP core for each dystonia group (100 bootstrap iterations) was also computed, and group differences were evaluated as described above. These analyses were considered significant for p<0.05, corrected for multiple comparisons. For graph analysis, the bootstrapped data were used to assess group differences in the network metrics for the relevant graph or subgraph. Group differences in each of the metrics were evaluated using the general linear model (GLM) with post-hoc Tukey-Kramer HSD tests across graph thresholds. These analyses were performed using IBM SPSS Statistics for Windows, version 21 (IBM Corp., Armonk, N.Y., USA). Results were considered significant for p<0.05.
Data availability
Deidentified data will be made available on reasonable request from interested investigators for the purpose of replicating results.
(75). c AAL = Anatomical Automatic Labeling atlas (62).
The number given for each significant region denotes the standardized region-of-interest (ROI) from the atlas that was used in the graph theory analysis (see text). BA = Brodmann area. Bold font indicates core nodes and regular font indicates periphery nodes.
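For readers who want to reproduce graph measures of the kind enumerated above (degree, clustering, path length, small-worldness, assortativity, rich-club) over the same 0.30-0.60 threshold range, the following Python/NetworkX sketch shows the basic computations. It is a simplified stand-in for the Brain Connectivity Toolbox/MATLAB analysis described in the text, not the authors' code; the correlation matrix, latent-signal simulation, and all parameter values are hypothetical.

```python
import numpy as np
import networkx as nx

def graph_metrics(corr, r_thresh, seed=0):
    """Binarize a nodal correlation matrix at |r| >= r_thresh and compute
    summary metrics analogous to those described in the text."""
    A = (np.abs(corr) >= r_thresh).astype(int)
    np.fill_diagonal(A, 0)
    G = nx.from_numpy_array(A)
    n, m = G.number_of_nodes(), G.number_of_edges()

    # Path-length measures are computed on the largest connected component.
    giant = G.subgraph(max(nx.connected_components(G), key=len)).copy()
    C = nx.average_clustering(G)
    L = nx.average_shortest_path_length(giant) if giant.number_of_nodes() > 1 else float("nan")

    # Equivalent Erdos-Renyi random graph (same n and m) for normalization.
    R = nx.gnm_random_graph(n, m, seed=seed)
    Rg = R.subgraph(max(nx.connected_components(R), key=len)).copy()
    C_rand = nx.average_clustering(R) or 1e-9
    L_rand = nx.average_shortest_path_length(Rg) if Rg.number_of_nodes() > 1 else float("nan")

    return {
        "density": nx.density(G),
        "mean_degree": 2.0 * m / n,
        "clustering_norm": C / C_rand,
        "path_length_norm": L / L_rand,
        "small_worldness": (C / C_rand) / (L / L_rand),
        "assortativity": nx.degree_assortativity_coefficient(G),
        "rich_club": nx.rich_club_coefficient(G, normalized=False),
    }

# Hypothetical 95-node "parcellation": correlated time series from shared latent signals.
rng = np.random.default_rng(1)
T, n_nodes, n_latent = 120, 95, 5
ts = rng.standard_normal((T, n_latent)) @ rng.standard_normal((n_latent, n_nodes))
ts += 0.3 * rng.standard_normal((T, n_nodes))
corr = np.corrcoef(ts, rowvar=False)

for r in np.arange(0.30, 0.61, 0.05):   # threshold range used in the text
    print(round(float(r), 2), round(graph_metrics(corr, r)["small_worldness"], 3))
```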
v3-fos-license
2018-07-04T00:14:13.380Z
2018-06-28T00:00:00.000
49563559
{ "extfieldsofstudy": [ "Geology" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/gea.21684", "pdf_hash": "ad50e28316882c016ed72bbc75b9cea41f384350", "pdf_src": "Adhoc", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41666", "s2fieldsofstudy": [ "Geology" ], "sha1": "f2e1dc5e3c7410e00098505e9daf14741c684979", "year": 2018 }
pes2o/s2orc
Geochemistry of Byzantine and Early Islamic glass from Jerash, Jordan: Typology, recycling, and provenance
Twenty-two objects of glass from the Decapolis city of Gerasa, N. Jordan, with characteristic vessel forms ranging from Hellenistic to Early Islamic (2nd century BCE to 8th century CE), were analyzed for major and trace elements, and 16 samples for Sr-isotopes. The majority were produced in the vicinity of Apollonia on the Palestine coast in the 6th–7th centuries CE, and strong inter-element correlations for Fe, Ti, Mn, Mg, and Nb reflect local variations in the accessory minerals in the Apollonia glassmaking sand. The ubiquity of recycling is reflected in elevated concentrations and high coefficients of variation of colorant-related elements, as well as a strong positive correlation between K and P. The high level of K contamination is attributed to the use of pomace (olive-processing residue) as fuel, and the negative correlation between K and Cl to volatilization of Cl as the glass was reheated. This points to an efficient system for the collection of glass for recycling in Jerash during the latter part of the first millennium CE. Differences in elemental behavior at different sites in the Levant may reflect the context of the recycling system; for example, glass from secular contexts may contain fewer colorants derived from mosaics than glass associated with churches.
INTRODUCTION
It is now generally accepted that from the late first millennium BCE until the late first millennium CE, the ancient glass industry was centralized. Large-scale natron glass production supplying the entire Eastern Mediterranean region was centered in only a few locations along the Palestine coast and in Egypt. Each production center produced unique glasses due to minor differences in recipe and the local raw materials, but common to the natron glass types is that they were made by mixing calcium carbonate-bearing sand with natron (soda) from salt lakes at Wadi el-Natrun or the Nile Delta (e.g., Brill, 1988; Degryse, 2014; Degryse & Schneider, 2008; Freestone, Gorin-Rosen, & Hughes, 2000; Freestone, Leslie, Thirlwall, & Gorin-Rosen, 2003; Nenna, Vichy, & Picon, 1997). These primary glassmaking centers exported the raw glass to population centers across the ancient world, where secondary glass workshops remelted and shaped the raw material into vessels, windows, and jewelry. Whereas the general outline
FIGURE 1 Regional map of Syria-Palestine with the location of the study site of Jerash as well as the surrounding contemporary cities of Petra and Umm el-Jimal. Also shown are glass production sites along the Levantine coast at Apollonia, Jalame, and Bet Eli'ezer.
highest area within the walled city (Lichtenberger & Raja, 2015, 2017) (Figure 2). The project explores mainly domestic complexes of this quarter of the city. Most of the excavated structures stem from the Late Roman to Early Islamic periods. During the excavations, deposits containing the inventories of houses were uncovered, and among the finds were glass vessels, most of them fragmented. Here, we present major and trace element data for 22 glass artefacts, as well as Sr isotopic compositions for 16 of them, excavated during the 2013 campaign of this project and selected to represent the range of glass forms encountered. The objectives of this study are twofold. The first is to determine the main glass types that reached Gerasa and how this reflects the supplies into the city and thus regional trade networks.
The second objective is a detailed characterization of the contaminants and postproduction chemical signatures that became incorporated into the glasses during remelting in secondary glass workshops. The signatures provide clues about the local remelting techniques, furnaces, fuel sources, glass types mixed during melting, and/or added colorants which, ultimately, reflect the inner workings and infrastructure of Gerasa within a local and regional context. The emphasis is on glass of the Byzantine and Early Islamic periods (5th-8th centuries CE), but we have included earlier representative forms in the analytical sample set.
SAMPLE MATERIAL
Early period material is rarely encountered among the glass finds. Only a few sherds can be assigned to the Hellenistic period (336 BC-30 BC). They belong to cast grooved bowls.
TABLE 1 Composition of Corning B glass standard by electron microprobe in this study compared to recommended composition.
RESULTS
All samples classify as low-magnesium, low-potassium (<1.5 wt% each of MgO and K2O) natron glasses (Lilyquist, Brill, & Wypyski, 1993; Sayre & Smith, 1961) (Figure 5). This is important for distinguishing between the different glass groups observed in our study. The three glasses which are typologically Roman are natron-based with CaO and Al2O3 contents below 8 wt% and 2.8 wt%, respectively, and Na2O concentrations above 17 wt% (Table 2), consistent with Roman glass produced in the first centuries CE (Jackson, 2005). Three distinct types of Roman glass are recognized primarily on the basis of their contents of the decolorizers Mn and Sb (Table 4).
Values are the means of six separate spot analyses.
FIGURE 6 Oxide ratio variation diagram of CaO/Al2O3 versus Na2O/SiO2 for Jerash glass groups, which discriminates between Levantine primary productions of glass produced at: (1) Jalame (4th century; data of Brill, 1988, supplemented with 3rd century Rom-Mn glass from Silvestri, 2008), (2) Apollonia (6th-7th century, Freestone et al., 2000, Tal et al., 2004, supplemented with vessels from Phelps et al., 2016), and (3) Bet Eli'ezer (7th-8th century, Freestone et al., 2000 and unpublished). Original plot lay-out from Phelps et al. (2016). All oxides are in wt% (Brill, 1984).
On the other hand, cat. no. 1 has only background Mn at 146 ppm and is amber, which is generally attributed to the presence of a ferri-sulfide complex (Schreurs & Brill, 1984). It may be pertinent that this glass has the highest sulfur content of those analyzed (0.3 wt%), and the absence of added manganese (Tables 2 and 4) is a typical feature of amber glass (Freestone & Stapleton, 2015; Sayre, 1963), as manganese is an oxidizing agent and the generation of amber requires reducing conditions. Glass characteristic of Late Roman-Byzantine Palestine was originally defined as Levantine by Freestone et al. (2000) and this term has been heavily used in the literature. It refers to lime and alumina contents in excess of 8 wt% CaO and 2.8 wt% Al2O3, combined with relatively low Na2O (<17 wt%), making these glasses distinct from older 1st-3rd century CE Roman glass (Figure 4; Table 2; Schibille et al., 2017). Levantine glass has been divided into a Levantine I type characterized by 68-71 wt% SiO2 and 14-16 wt% Na2O and a Levantine II type with relatively higher 73-76 wt% SiO2 and lower 11-13 wt% Na2O contents (Freestone et al., 2000).
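As a rough illustration of how the compositional criteria quoted above can be applied to a tabulated analysis, the sketch below assigns a provisional group from the major-element oxides. The numerical cut-offs are taken from the text (the CaO, Al2O3, and Na2O limits separating Roman from Levantine glass, and the SiO2/Na2O ranges for Levantine I and II); the function itself, the sample values, and the idea of a simple rule-based classifier are illustrative assumptions, not a substitute for the multivariate oxide-ratio comparisons in Figure 6.

```python
def classify_natron_glass(oxides):
    """Provisional grouping of a natron glass analysis (wt% oxides) using the
    threshold values quoted in the text. Returns a label string."""
    cao, al2o3 = oxides["CaO"], oxides["Al2O3"]
    na2o, sio2 = oxides["Na2O"], oxides["SiO2"]

    # Roman-type glass: lower lime and alumina, higher soda.
    if cao < 8.0 and al2o3 < 2.8 and na2o > 17.0:
        return "Roman (1st-3rd c. CE type)"

    # Levantine glass: lime and alumina in excess of 8 and 2.8 wt%, lower soda.
    if cao > 8.0 and al2o3 > 2.8 and na2o < 17.0:
        if 68.0 <= sio2 <= 71.0 and 14.0 <= na2o <= 16.0:
            return "Levantine I (Apollonia/Jalame type)"
        if 73.0 <= sio2 <= 76.0 and 11.0 <= na2o <= 13.0:
            return "Levantine II (Bet Eli'ezer type)"
        return "Levantine (unassigned subtype)"

    return "unclassified"

# Hypothetical analysis, loosely in the Levantine I range.
sample = {"SiO2": 70.1, "Na2O": 15.2, "CaO": 8.9, "Al2O3": 3.0}
print(classify_natron_glass(sample))
# Oxide ratios of the kind plotted in Figure 6:
print(sample["CaO"] / sample["Al2O3"], sample["Na2O"] / sample["SiO2"])
```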
More recently, it has been recognized that the original grouping of Levantine I incorporated the products of at least two different primary productions (Al-Bashaireh et al., 2016; Phelps et al., 2016; Schibille et al., 2017) and that these can be more-or-less separated on the basis of composition: 6th-7th century CE glass from Apollonia (Tal, Jackson-Tal, & Freestone, 2004) and 4th century CE glass from Jalame (Brill, 1988). Figure 6 presents our data in terms of CaO/Al2O3 and Na2O/SiO2 ratios, which pull apart glass from the two production sites (Phelps et al., 2016) and also separate Apollonia glass from natron glass (previously Levantine II) from the Umayyad production site at Bet Eli'ezer, Hadera (Al-Bashaireh et al., 2016). The reference data for Roman-period Jalame glass are supplemented with data for Rom-Mn glass from the 3rd century CE Iulia Felix wreck (Silvestri, 2008; Silvestri, Molin, & Salviulo, 2008), and those for the Byzantine Apollonia type with the analyses of vessels from Palestine (Phelps et al., 2016). Whereas the typologically Roman and Hellenistic glasses analyzed plot on the right-hand side of Figure 6, similar to 4th century Jalame, the 14 Late Roman-Byzantine (4th century CE or later, Table 2) samples which have natural levels of Mn or low Mn levels up to 500 ppm lie to the left; they plot in the Apollonia field or have compositions which straddle the boundary between Jalame and Apollonia glasses. In addition to their dating, the low manganese contents of these glasses are consistent with their assignation to Apollonia rather than Jalame, since deliberately added Mn is present in about half of the glass analyzed from Jalame (Brill, 1988), whereas this has not been observed in primary glass from Apollonia tank furnaces. Catalogue numbers 11a, 11b, and 16, which have slightly elevated (≈300 ppm) Mn, are separated in the analytical tables and in the figures, as they are shifted slightly towards the Jalame field (see above; Figure 6). However, the low Na2O contents of these glasses associate them with Apollonia-type production and we consider them as Apollonia-type glass, with admixing of a small amount of older Mn-decolorized material. These are henceforth referred to as "low-Mn" as opposed to glasses with "background" levels of Mn. Cat. nos. 6, 7, and 14 have significantly elevated (>2000 ppm) manganese and could be interpreted as Jalame-type glasses (Figure 6). None of the glasses analyzed corresponds to the field of the majority of low-soda high-silica products of the 7th-8th century CE furnaces at Bet Eli'ezer (Figure 6).
Origins of primary glass types in Jerash
The strontium isotope data indicate that most of our samples have 87Sr/86Sr ratios close to the Holocene seawater (and beach shell) value
In the 1st century CE, the production of antimony-decolorized glass was established and it appears to have been preferred for more expensive items such as tableware with cut decoration . The Roman Sb and Roman Mn-Sb glasses from Jerash differ significantly from the Byzantine, Rom-Mn, and Hellenistic glasses in terms of Rb, Ba, and LREE (La, Ce) in particular (Figure 8). This supports the view that Rom-Sb glass was not a product of Palestine and more likely originated from Egypt (Degryse, 2014;Schibille et al., 2017). The results therefore suggest that glass used at Jerash from the Kamber et al., 2005). Our groups are compared to primary glass from Apollonia (Phelps et al., 2016). Jerash Byzantine glasses include low and high Mn groups. Note the logarithmic scale [Color figure can be viewed at wileyonlinelibrary.com] could be a sampling effect and small quantities of these types might have reached Jerash; this possibility will be further explored in later studies. Of particular interest is that no glass which might have originated in the tank furnaces at Bet Eli'ezer (Levantine II of Freestone et al., 2000) has been detected. The Bet Eli'ezer furnaces appear to have been in production from about 670 CE (Phelps et al., 2016). The absence of this glass type in Jerash suggests that none of the glasses analyzed date later than the third quarter of the 7th century CE. This is possible, as typologically all of the forms could date to late Byzantine times or earlier. Furthermore, Bet Eli'ezer-type glass has been identified at another site in Jordan, Umm el-Jimal, located away from the coast but some It is pertinent that the glass of the 2nd-4th century CE typically has a dark weathering patina, whereas the 6th-7th century CE fragments weather to an opaque white. The precipitation of manganese oxide in the weathered layers is well-known as the cause of the darkening of medieval European glass (Schalm et al., 2011) and it seems likely that this is also the case for Jerash, as the Roman-period pieces typically have high levels of MnO, whereas the later glasses typically have MnO at background levels. The Roman-period antimony-decolorized glass, cat. no. 5, has low manganese, and also weathers to an opaque white patina, consistent with these observations. Secondary processing phenomena: recycling in the Apollonia-type glasses The discussion below will focus on the Apollonia-type glasses since these are by far the most dominant group in our sample-set and because they show distinct features that relate back to production and postproduction processes in secondary workshops. The location of the glass workshops which made the vessels in Jerash have yet to be determined by excavation, but it seems very likely that, like other cities in the region in Late Antiquity they were in the immediate vicinity. A number of compositional effects might be anticipated from the mixing and remelting processes which comprise a glass recycling system: (1) mixing of different primary glass compositions, (2) contamination from the melting furnace/crucibles and iron glass working tools, (3) contamination with colorants and decolorizers from the incidental inclusion of old colored glass in the batch, (4) contamination by components of fuel and fuel ash, and (5) loss of volatile components to the furnace atmosphere. By definition, if a glass object is remelted to make a new one, then it is recycled, and evidence for remelting is therefore evidence for recycling. 
In terms of mixing different glass types, we have observed above that the Byzantine glasses with high Mn lie closer to earlier Roman glasses in terms of major components such as Na2O (Figure 6) and that this is the result of mixing Apollonia-type glass with Roman Mn-decolorized glass. No other compelling evidence of mixing of primary glasses is recognized here, but this process is implicit in some of the data, for example, the behavior of colorant-related elements, below. There is a strong correlation between Fe and Mg in the Apollonia-type glasses from Jerash, which is also present for other transition metal elements such as Ti, V, and Nb (Figure 9). An enrichment in Fe has been observed in Roman Mn-Sb glass from York, UK, and ascribed to contamination from ceramic melting pots or iron blowpipes. If the Fe was derived from an iron tool, departure from the trend with MgO would be expected, and this does not occur (Figure 9a). Contamination from the furnace is a possibility, but the high Mg:Fe ratio of 1:1 indicates control from heavy minerals such as amphibole, pyroxene, spinel, and zircon (e.g., Molina, Scarrow, Montero, & Bea, 2009) rather than clays, which are generally strongly dominated by Fe relative to Mg (e.g., Kamber, Greig, & Collerson, 2005). Therefore, the covariations in the Apollonia samples are a primary feature controlled by differences in the accessory mineral assemblage of the individual batches of coastal sand used for their production. A similar explanation has also been offered by Schibille et al. (2017) for FeO-MgO covariations in Rom-Sb glasses that they analyzed from Carthage. However, the Rom-Sb glasses reported by Schibille et al. (2017) have an MgO:FeO ratio of 2:1, relative to the Mg and Fe covariation of 1:1 in the Jerash glasses (Figure 9). These very different Mg/Fe ratios support the hypothesis (see above) that the Sb-decolorized glass did not originate in the Levant, but elsewhere, possibly Egypt. In addition to iron and other transition metal oxides, an increase in alumina concentration might be expected if a glass was significantly contaminated by furnace ceramic during remelting. Figure 6 shows no enrichment in Al2O3 of the Jerash samples relative to Apollonia primary glass, and we can therefore assume from this and the iron oxide that contamination of the glass from ceramics (furnace) during any recycling that occurred was minimal. This differs from the conclusions drawn for the York glasses and may reflect the arrangement of the furnace. In the Levantine region, there is limited evidence for the use of pots or crucibles in which glass was melted, and it appears that even at the secondary stage the glass was melted in tanks (e.g., Gorin-Rosen, 2000); this was also the case in larger centers in the West, for example Roman London (Wardle, 2015). There is evidence, however, for melting pots at York, UK, studied by Jackson and Paynter (op. cit.). Tanks will typically have had a much larger volume to surface area ratio than pots or crucibles, and the interaction between the walls of the tank and the bulk of the glass will have been correspondingly less, explaining the discrepancy. A distinctive group of elements, including Cu, Sn, Pb, Co, and Sb, shows different behavior. Figure 10 shows that, where crustal values are available (Kamber et al., 2005), these elements show a substantially higher level of enrichment in our glasses than those elements associated with accessory minerals.
They are the elements associated with glass coloration and, following earlier studies (Freestone, Ponting, & Hughes, 2002; Jackson, 1996; Mirti, Lepora, & Saguì, 2000), it is considered that a significant component originates in the incidental incorporation of small amounts of earlier colored glasses in recycling processes. This effect of recycling on the distributions of these elements is conveniently illustrated in terms of the coefficients of variation (relative standard deviations) for the individual elements (Figure 11). The colorant elements have very high CVs due to the imperfect nature of the recycling process and the failure to completely mix and homogenize separate glass batches. Furthermore, several element pairs show very strong correlations, such as Cu-Sn and Pb-Sb (R2 of 0.75 and 0.81, respectively), and these appear to reflect specific coloring agents such as bronze scale and lead antimonate. The implication is that, whereas the Apollonia-type glasses from Jerash show features fully consistent with a single primary production, there has been significant recycling and this is reflected in the colorants. Furthermore, analysis of glass from tank furnaces on the Levantine coast indicates Pb values typically less than 10 ppm and Cu values less than 5 ppm (Brems et al., in press; Phelps et al., 2016), whereas, with only one exception, our Apollonia-type glasses contain higher levels (Table 4), suggesting that the great majority of the Apollonia-type glass analyzed here contains some recycled material.
FIGURE 10 Trace element concentrations (ppm) related to colorant additions to glasses, normalized to weathered continental crust (MUQ of Kamber et al., 2005). Our groups are compared to the primary glass composition from Apollonia (Phelps et al., 2016). Jerash Byzantine glasses include low and high Mn groups. Note the logarithmic scale.
FIGURE 11 Coefficients of variation (= relative standard deviations) for trace elements for all Apollonia-type glasses with background levels of Mn (Table 2). Sand-related elements typically have low CVs, whereas those associated with colorants are high. Elements associated with alkali and ash (U, Rb, and B) are intermediate.
Influence from fuel and furnaces during recycling in Jerash
Evidence that the Apollonia-type glasses had been through one or more episodes of recycling has been inferred above from the colorant element concentrations.
FIGURE 12 Oxide variation diagram of P2O5 versus K2O in wt% (representing contamination from fuel ash) for Jerash Byzantine glass groups with background and low Mn, compared to Byzantine glass compositions observed at other Levant cities. Data for Petra, Deir Ain Abata (Rehren et al., 2010), Ramla, Israel, and Umm el-Jimal, Jordan (Al-Bashaireh et al., 2016). Primary glass from Apollonia from Phelps et al. (2016). R2 value is for the fitted regression line through the Jerash glass group.
The effects of workshop practices on glass composition have been explored experimentally by Paynter (2008),
While we can expect the contaminants to have been strongly controlled by the type of glass, the fuel, the firing temperatures and the type of clays used to make the furnace, Paynter's study provide some important clues about potential influences from furnace and fuel. Concentrations of K 2 O and P 2 O 5 in Levantine I glasses from Jerash are high, up to 1.33 and 0.21%, respectively. Not only are these values twice as high as in glass from the primary furnaces at Apollonia (Freestone et al., 2000;Tal et al., 2004), but these two components are strongly correlated (R 2 of 0.88 in Figure 12). The K 2 O and P 2 O 5 correlation observed for the Jerash glasses is most likely the result of interaction with the fuel ash and fuel ash vapors during remelting and/or working as has been observed for Apollonia-type glass at other contemporary sites such as Petra, Jordan (Rehren, Marii, Schibille, Stanford, & Swan, 2010), Ramla, Israel and This may be due to the configuration of the Jerash furnace(s), so that the glass was protected from contamination by solid ash, and the contamination was largely from the vapor, but it could also be due to the type of fuel used. A plausible fuel for Jerash is olive pits, given the finds of olive crushing mills and olive pits in many layers in Jerash. There is little doubt that olives have played a role regionally and oil production in general was significant (e.g., Ali, 2014). Rowan (2015) has drawn attention to the extensive evidence for the use of pomace, olive-pressing waste, as a fuel in antiquity. Olive pits as fuel for glass production are particularly suitable, since their fire burns hotter than wood and therefore they have excellent qualities for glass melting. Large amounts of charred olive pits were found close to glass furnaces in Beth Shean (Gorin-Rosen, 2000) and Sepphoris (Fischer & McGray, 1999), but until now evidence for the actual use of these for firing has not been drawn from the chemistry of the glass samples. Data on the chemistry of olive residues is available due to modern interest in their potential as a biofuel. CaO for the ash of "olive residue" (Gogebakan & Selçuk, 2009). It is clear that the potash to lime ratio of olive pit/residue ash is significantly higher than those of most hard and soft wood ashes, in which lime is generally in excess of potash (e.g., Misra, Ragland, & Baker, 1993). Therefore, furnaces operating with a high proportion of olive pits in the fuel would produce ash with substantially more K 2 O than those firing mainly wood. The high level of enrichment of potash observed in this study and in other glasses from Jordan strongly suggests that olive pits were a significant component of the fuel used, consistent with the archaeological evidence from the region. Miranda et al. (2008) report that their olive pit ash also contained 3.43% P 2 O 5 , which would volatilize and explain the correlation observed between phosphate and potash. We observe a negative correlation between potash and chlorine, which has previously been observed in glasses from Umm el-Jimal originates from the natron and would normally be expected to show a positive correlation with soda (Na), also coming from the natron, and be stabilized in the melt due to sodium-chloride (Dalou, Le Losq, Mysen, & Cody, 2015). However, given the volatile nature of chlorine, as well as the alkalis, repeated melting, particularly at high temperature, inevitably leads to Cl (and to a lesser degree alkali) loss (Freestone & Stapleton, 2015). 
This does not explain the antithetical relationship seen for Cl and K in Jerash Byzantine glass (Figure 14). As for Umm el-Jimal and Petra, we ascribe this correlation to a combination of recycling (leading to chlorine loss) and contamination by fuel ash (leading to increased potassium). Moreover, the strong negative K-Cl correlation (R2 = 0.63) compared to other sites in the region (Umm el-Jimal at 0.25 and Petra at 0.24), in addition to the even stronger positive K-P correlation (R2 = 0.88; Figure 12), suggests that glass recycling was more intensive at Jerash. Jackson, Paynter, Nenna, and Degryse (2016) (Carroll, 2005; Metrich & Rutherford, 1992; Veksler et al., 2012). This expectation is realized for medieval and postmedieval glasses, where Na and Cl are positively correlated (Schalm, Janssens, Wouters, & Caluwé, 2007; Wedepohl, 2003), and is supported by evidence of immiscible droplets of sodium chloride in ancient Cl-rich soda-lime-silica glasses (Barber & Freestone, 1990; Barber, Freestone, & Moulding, 2009). Finally, the importance of alkali-Cl complexes in the melt and the inevitability of Cl loss during fusion are corroborated by the relatively high chlorine contents of Roman amber glass, which have been suggested to be the result of relatively short melting durations used to preserve the color (Freestone & Stapleton, 2015). In conclusion, our observation that chlorine abundance is antithetical to potash for Jerash Byzantine glass, and the lack of demonstrable correlations with soda and lime, are consistent with recycling and thus not a feature of primary glass production. We are not proposing that Cl abundance is a universal tracer of recycling, melting duration, or melting temperature, but rather that, when one considers glasses of similar major element composition from a similar technological context, chlorine content coupled with correlations (or not) with other glass constituents is a useful indicator of recycling.
Compositional dependence upon the context of the glass recycling economy
The Jerash data emphasize the complexity of the glass recycling process and the dependence of the composition of the recycled glass upon the local social context. It has been observed that the characteristics of recycling differ from those in some western contexts, such as York, as contamination from container ceramics is not apparent. Furthermore, the elevated values and strong correlations for potash and phosphorus observed here are not as apparent in western Roman glasses which are believed to have been recycled (e.g., Freestone, 2015; Silvestri, 2008), or even in Apollonia-type glass from Israel (Phelps et al., 2016), and this may be related to the fuel used. It is also noted that in Umm el-Jimal, in northern Jordan, Apollonia-type glasses show a greater overall enrichment in trace metals generally added as colorants, where Cu and Pb enrichments are detectable using EPMA rather than at the trace levels observed here (Al-Bashaireh et al., 2016). This is likely to stem from the nature of the reservoir of glass undergoing recycling. The Umm el-Jimal glass was recovered from churches, where storage of colored glasses from mosaics for recycling might be expected, as has been observed at Petra (Marii & Rehren, 2009). We speculate that this led to relatively high contents of glass colorants in the glass from Umm el-Jimal. The glass from Jerash analyzed here originates from domestic houses and shows relatively weak enrichment in colorant elements but strong evidence of recycling in the fuel-related components.
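At the assemblage level, the recycling indicators used in this discussion (coefficients of variation for colorant-related elements, and the strength and sign of the K-P and K-Cl correlations) are straightforward to compute from a table of analyses. The NumPy sketch below shows the arithmetic on an invented six-sample table; the element values are hypothetical and chosen only to mimic the behavior described in the text.

```python
import numpy as np

def coefficient_of_variation(values):
    """Relative standard deviation (CV) of one element across an assemblage."""
    values = np.asarray(values, dtype=float)
    return values.std(ddof=1) / values.mean()

def r_squared(x, y):
    """Squared Pearson correlation between two compositional variables."""
    return np.corrcoef(x, y)[0, 1] ** 2

# Hypothetical analyses of six Apollonia-type vessels (oxides in wt%, Cu in ppm).
k2o  = np.array([0.85, 1.05, 1.33, 0.95, 1.20, 1.10])
p2o5 = np.array([0.12, 0.15, 0.21, 0.13, 0.19, 0.16])
cl   = np.array([0.95, 0.80, 0.55, 0.88, 0.62, 0.70])
cu   = np.array([12.0, 45.0, 220.0, 18.0, 150.0, 60.0])

print("CV(Cu):", round(coefficient_of_variation(cu), 2))   # high CV -> imperfect mixing
print("R2(K2O, P2O5):", round(r_squared(k2o, p2o5), 2))    # positive -> fuel-ash uptake
print("R2(K2O, Cl):", round(r_squared(k2o, cl), 2))
print("K-Cl sign:", "negative" if np.corrcoef(k2o, cl)[0, 1] < 0 else "positive")
```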
There is substantial archaeological evidence from Jerash which attests to the collection of glass, possibly for recycling. Such glass heaps stem from the churches, especially from the passage north of St. Theodore and from a room under the north stairs from the Fountain Court (Baur, 1938 in: Kraeling 514-515). Also, evidence for already recycled material in the form of glass cakes, probably prepared for the production of glass tesserae, has been found in Jerash in the so-called Glass Court (Baur, 1938 in: Kraeling 517-518). Future work, comparing glass associated with the churches with that from more secular, domestic contexts, might cast light on the organization of the glass industry in the Levant at this time.
CONCLUSIONS
The excavated glass from Jerash, Jordan, dating to between the Hellenistic and the Late Byzantine periods, derives mainly from the Levantine coast, with some, possibly Egyptian, antimony-decolorized glass in the Roman period. The Byzantine glass, which dominates the assemblage, derives mainly from the tank furnaces located in or around Apollonia. A consideration of the manganese contents of the Apollonia-type glass indicates that Mn is generally present at background levels, and where elevated it is the result of remelting and mixing of Roman glass during recycling. Significant evidence for recycling is observed in the form of elevated potash and phosphate contamination from the fuel, as well as elevated transition metals. Concomitantly, there was a depletion in chlorine, due to volatilization at high temperature. For the first time, we draw attention to the effect of recycling on the coefficients of variation of trace elements in the glass. These types of indicator can provide clues as to the relative intensity of the recycling process, which elemental concentrations alone do not. Despite the apparent proximity of Jerash to primary glass production sites near the Levantine coast, an efficient system for recycling of old glass must have been in place. The implication of a well-organized recycling system in Jerash suggests limited glass import from the Levantine coast and elsewhere, which is supported by the finds of only a few Roman glasses and a lack of Egyptian-type glasses. The localized nature of recycling in Jerash displays important regional differences, which we relate to differences in interaction zones and proximity to the production sites at the Mediterranean coast. The characteristic K-enrichment observed at Jerash and other Levantine locations has implications for the type of fuel used and is likely to indicate a significant component of olive-pressing residue. Differences between Jerash and other sites in the region such as Umm el-Jimal suggest that the nature and degree of over-printing of primary compositions by secondary recycling processes are specific to the context within which the recycling took place. Technological factors relating to local practice, as well as the types and quantities of glass available for recycling, may provide a fingerprint of the secondary workshop. In favorable circumstances, this may allow the attribution of glass vessels to secondary workshops through elemental analysis. The phenomenon of recycling fits well with the overall economic situation of the cities in the region in the 7th-8th centuries CE. The towns underwent a considerable process of urban industrialization on the one hand and localization of trade networks on the other (see e.g., Avni, 2014, 290-294; Walmsley, 2000, 305-309, 321-329, 335-337).
Both processes are apparent in recycling, which attests to local production within the city and a limited supply of (or high demand for) raw materials.
v3-fos-license
2018-12-22T07:55:18.329Z
2012-11-07T00:00:00.000
154708865
{ "extfieldsofstudy": [ "Business" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "http://academicjournals.org/journal/AJBM/article-full-text-pdf/B19FFC916404", "pdf_hash": "649cb52d86ff22e5b30c0c521af4d84d89371b5a", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41670", "s2fieldsofstudy": [ "Business", "Education" ], "sha1": "649cb52d86ff22e5b30c0c521af4d84d89371b5a", "year": 2012 }
pes2o/s2orc
Impact of employer brand-equity promotion for effective talent recruitment of fresh graduates in Pakistan The current study was an attempt to investigate the impact of employer brand equity towards effective talent recruitment in telecommunication companies of Pakistan. As suggested by Ting and Paul (2011), the study also sought to test the mediating role of educational institutions. Two types of survey questionnaires, adapted (with slight modifications) from the study of Collins and Stevens, were used to obtain the data from 200 fresh university graduates and 273 employees of the five cellular companies located in Islamabad and Rawalpindi. SPSS 18 and AMOS 18 software were used for data analyses. Data received from the university graduates was used for rating these companies against different parameters, for which cross tabulation was used to statistically test the results. Whereas, the data collected from the employees was used to measure the mediating effect of educational institutions in relationship between employer brand equity and effective talent recruitment. The results of structural equation modelling show that the educational institutions play partially mediating role in hypothesized relationship. The study recommends that the organizations should seek help from the educational institutions to improve effectiveness of their brand equity and talent recruitment. The study has implication for organizations as well as the academicians for further research on the topic, including other variables. INTRODUCTION Employment branding and branding are the two primary sources of this study. First, there is a long history of the study of organizational attraction and secondly, there is tremendous growth in cross flow of research merge branding concepts from the consumer psychology (Turban et al., 2009) literature of organizational attraction and applicant job search behaviour (Gardner et al., 2010). This merging of employment brand with product and service brands is proficient, simply by recognizing employment as a monetary exchange between workers and employers and also resulting into improved brand equity promotion along with improved recruitment *Corresponding author. E-mail: amarabaloch@gmail.com. process (Melin, 2005). Organizations traditionally hire new employees by determining job vacancies, advertising open positions, applicant's valuation and response to vacancy announcements, screening of applications and extending the job offer. There may be some other common paths used to complete the recruitment process. Conventionally speaking, the recruitment process is of two categories. One is of the employees who are already working somewhere and are known as the knowledge workers and second is the fresh talent recruitment. In this study, we will focus on the second category which is about the fresh talent recruitment by developing the strong and attractive brand image of employer through the educational institutions. On the other hand, it does not mean that employer brand equity does not exist, but it is about creating the awareness of the company before they can influence the job seekers; because what an applicant know about company affects his or her behaviour in the application process. Though organizations generally focus their branding efforts towards developing products and corporate brands, branding can also be used in the area of human resource. The application of branding principal to human resource management has been termed as "employer branding". 
Employer branding is defined as "a targeted, long-term planning to manage awareness and perception of employees, potential employees, and related stakeholders with regards to particular firm" (Sullivan, 2004). The employer brand puts forth an image showing the organization as a good place to work (Sullivan, 2004). Employer branding is a concept of developed countries where the recruitment process is already in mature phase. Pakistan is lacking in maturity of this process which is a missing element that results into less employer brand equity promotion in the fresh talent. If we talk about this concept of employer branding in Human resource area in Pakistan, there is no substantial study available which can be referred. It is a new area where the empirical study was required to make the concept obtainable. Currently in Pakistan, online recruitment and job posting is mostly exploited by the large organizations. Looking into the recruitment process of the famous companies of Pakistan as given by the largest job search engine (Rozeepk.com), Figure 1 depicts percentage of respondents which shows the substantial need for the employer brand awareness promotion. If we come across region wise on respondent's ratio, it is as shown in Figure 2. Rationale of the study In this study, the emphasis will be on developing new fad of companies brand equity in the mind of fresh talent through their educational institutes by the business organizations. In return, this will create repute of the employer in the mind of fresh graduates and will make their thought process mature while seeking for a job. According to Jiang and Iles (2011), it is "the process that leads employees and prospective applicants to be attracted to remain with the organization". "Researchers have tended to examine a narrow range of practices, even though there are a variety of human resources practices and other organizational activities that may attract potential applicants" (Barber, 1998). Aim of the study The aim is to develop an understanding of how organizational recruitment practices may affect fresh graduate job application decisions. It is about creating a strong relationship between industry and academia and developing employer specific brand equity in fresh graduates through their educational institutes. This study is going to cover two things. Firstly, it is going to assist the fresh graduates to decide and bring clarity in their mental attitude for applying the relevant jobs. Secondly, it will enable the companies to first create branding equity of their company before they can influence job seekers. Critical analysis and gap identification The proposition of the study was focused towards the investigation and analysis pertaining to the relationship building between industry and academia for effective talent recruitment through promotion of employer brand equity in conjunction with educational institutions. Employer branding is a relatively new approach towards recruiting and retaining best possible human talent in developing countries. A recent study was carried out by Gul et al. (2011) in Pakistan about the employer brand equity but they only measured the perceived brand image in the labour market. They assumed that employer brand equity was already prevailing in the market. Pakistan's educational institutions have been producing about two million fresh graduates of different disciplines to the labour market each year. 
Up till now, Pakistan's education system has been directing the huge bulk of these new masses to enter the labour market. The outcome was twofold bind where fresh talent are unable to find relevant place in the market and at the same time, employers grumble that they cannot find required fresh applicants for job postings. This study was carried out to fill the gap mentioned in the Ting and Paul (2011) study about employer branding through universities in the mind of fresh graduates and strengthening the industry academia relationship. The gap or suggested future study by them was to investigate how fresh graduates value company image as an employer in University recruitment and how the healthier relationship among the organization and educational institutions might fortify charisma of employer branding to generate awareness in fresh graduates in particular. Research objectives The objectives of the study were to: i. Find out the relationship between employer brandequity and effective talent recruitment in mobile phone companies of Pakistan ii. Investigate the mediating role of educational institutions in relationship between employer brand equity and effective talent recruitment iii. Obtain the rating of different mobile companies of Pakistan as prospective employer for fresh graduates, against different parameters Research question 1. What significant relationship exists between the employer brand equity and effective talent recruitment in mobile companies of Pakistan? 2. To what extent do the educational institutions mediate the relationship between employer brand equity and effective talent recruitment in mobile phone companies of Pakistan? 3. How do the fresh graduates of Pakistan perceive different mobile phone companies as prospective employer against different parameters? Delimitations of the study Limitations of the study were the recruitment strategies of the organizations and the existing culture which prevails in the labour market. There are other aspects of brand equity promotion for recruitment but the current study will only cater the given context in the model. Our focus group for the study was the fresh graduates of today. Other groups of the labour market have been excluded in this research. Concepts and definitions Employer brand equity: Employer branding is known as the latest trend in the employment strategy and a worldwide notion which belongs to the effort constructing the uniqueness as an employer apparently. "Employer branding is a term often used to describe how organizations market their offerings to potential and existing employees, communicate with them and maintain their loyalty; advancing, internally and externally in the organization, a clear view of what makes a firm different and desirable as an employer" Backhaus and Tikoo (2004). Effective talent recruitment: The impact of globalization has squeezed the world and companies are continuously expanding their pawmarks, the want for a universal impend to talent attainment and the capability to access eminence domestic adept is a key for accomplishing decisive factor. The primary rationale of hiring is to build up masses of competent individuals. Educational institutions: Companies are now becoming conscious to invest in building up and overseeing masses of competent individuals globally in order to generate a legitimate competitive gain which ultimately in return can construct a sturdy relation with the academic bodies to hire fresh graduates. 
LITERATURE REVIEW

Employment branding has been called "the hottest strategy in employment" (Sullivan, 1999). "It is a long term planning of the organizations to the shortage of talent problem. Most employment strategies are short-term and reactive to job openings; building an employment brand is a longer-term solution designed to provide a steady flow of applicants". For Feldwick (1996), "conceptual confusions surround brand equity as a gauge of brand attraction or value. Brand equity evolved from concept of brand image in the 1980s when brand values became apparent in financial terms and could be seen as the total value of brand as a separable asset". Employer branding is the process of placing the image of a great place to work (Barrow and Moseley, 2005) in the minds of fresh graduates. Product branding is the process of building an enduring image in the minds of end users; it ultimately affects, and is linked to, the quality of the product or service offered by the business concerned. Employment branding does the same: it creates an image that makes people want to work for the organization because it is well managed and a place where employees continuously learn and grow. "Once the image is developed, it generally results in steady flow of talented applicants. Employment branding uses the tools of marketing research, public relations (PR) and advertising to change the image applicants have of what it is like at the organization" (Sullivan, 1999). The concept of branding in HRM was first introduced by Ambler and Barrow (1996), depicting the employer as a brand and the employee as the end user. They describe employer branding as the package of functional, economic and emotional benefits offered by employment and identified with the employer. "Customer-based brand equity refers to beliefs held by individual consumers about a product's or a service's brand (that is, perceptions of the name or logo) that affect their preferences and purchasing decisions relative to other unbranded products or services with similar attributes" (Aaker, 1991; Keller, 1993). The term recruitment involves seeking out and attracting prospective job applicants in adequate numbers and of adequate quality so that the organisation can select suitable individuals to fill its job requirements. Increasingly, organizations are focusing their recruitment on young professionals and fresh graduates, and researchers are accordingly paying particular attention to the significance of this group. In their view, these fresh graduates are assumed to be the future prospects of the industry. Establishing strong relations with the fresh graduate population gives a company an exclusive means to profile itself as a future employer and to enhance its corporate image. "Research has indicated that one major determinant of an organizations ability to recruit fresh graduates is organizational reputation (Cable and Turban, 2003), referring to the status of a firm's name relative to competing firms" (Belt and Paolillo, 1982; Gatewood et al., 1993). We have treated reputation as comparable to a trademark and have framed employer brand equity in this regard. The recruitment literature has highlighted job search decisions as the ultimate outcome of interest. Research on employer familiarity and knowledge has blossomed within the past decade.
Cable and Turban (2001) has proposed that employer knowledge is comprised of familiarity, employer image (beliefs about employers) and reputation (perceived firms' evaluations held by other people); employer images, in turn, have been described as containing both instrumental (job and organizational attributes) and symbolic content. Employer images consistently shows stronger relationships with job seekers intentions and attractions (Chapman et al., 2005) and have been shown to increase as a result of exposure to news stories (Slaughter et al., 2004), product and company provided information, and word of mouth transmission (Cable et al., 2000). Educational institution's student resource centres will be used as talent pools. It is a mechanism of HRM which is principally in the framework of staff hiring. If organizations have to pile up a job opening, they will be able to portray the talent masses and there seem to be ways of harmonizing the profile in the right candidates and job vacancies. Companies will first create awareness of their company image before they can influence fresh graduates. Visits from organization's HR department to student resource centres of the educational institutes will update the information about the recruitment process and new employment openings in the organization. In this regard, organizations will need to implement information recruitment practices, therefore their recruitment sources will provide line up in general and positive images of the company using recruitment sources. They will provide more extensive information to fresh graduates through detailed ads, brochures, web sites and organizing events in campuses. These activities will propel the familiarity of the employer in the minds of fresh graduates. Database developed in the student resource centres will be according to their educational programs so that appropriate job offers can be conveyed to their relevant graduates. Publicity, word of mouth, advertisement and sponsorship will form the key attributes for fostering employer brand equity. Theoretical framework In Figure 3, employer brand equity will be developed in the minds of fresh graduates through the educational institutions. Therefore, the following hypotheses were developed based on our proposed theoretical framework: H 1 : Publicity as a component of employer brand equity is positively associated with effective talent recruitment in mobile phone companies of Pakistan. H 2 : Word of mouth as a component of employer brand equity is positively associated with effective talent recruitment in mobile phone companies of Pakistan. H 3 : Advertising as a component of employer brand equity is positively associated with effective talent recruitment in mobile phone companies of Pakistan. H 4 : Sponsorships as a component of employer brand equity is positively associated with effective talent recruitment in mobile phone companies of Pakistan. H 5 : Educational institutions of Pakistan play mediating role between employers and university students to create effective brand equity leading to the effective talent recruitment. Research design The main source to gather the data was through questionnaires floated in the universities of Islamabad and in the selected organizations. The items used were adapted from the previous studies with the permission of the authors. Questionnaires were based on the interval scale, that is, 5 point Likert scale. 
The main purpose was to measure employer brand equity in the minds of fresh graduates with respect to their prospective future employers. The research focuses on the telecom sector of Pakistan, and the instrument was floated to measure the employer brand equity of the five existing companies in the minds of fresh talent. A comparative analysis among the companies was made to identify the strongest employer image.

Sampling criteria

Convenience and purposive sampling were adopted in the research by floating the questionnaires in the educational institutions and in the offices of the selected organizations of Islamabad.

Population and sample size

The targeted population for this study was the graduating students of the universities and the employees of the selected organizations of Islamabad. A sample of 250 students was selected and questionnaires were distributed in various universities, and a sample of 350 was selected from the employees. A total of 531 questionnaires were returned, out of which 473 were found to be accurately filled and suitable for further statistical analysis; a response rate of 78% was therefore achieved.

Type of study and time horizon

This is a descriptive, one-shot study that is cross-sectional in nature.

Instrument

Two types of questionnaire were used for the collection of the data: one was floated among university final-semester students and the other among the employees of the selected organizations. The questionnaire used in this study was adapted from the Collins and Stevens research conducted in 2002 in the USA, and the employer brand equity dimension measures were adapted to the Pakistani environment. The items of the questionnaire were measured on a five-point Likert scale in which 1 indicated "mostly" and 5 indicated "never". The instrument contained a total of 35 items, in which publicity, advertising, word of mouth and sponsorship had 5 items each, while employer awareness, information, familiarity and responsiveness had 3 items each. The dependent variable had only one dimension and was measured by 3 items. The questionnaire floated in the organizations was slightly modified in scale to obtain the opinions of employees, while the questionnaire floated among the students was used for rating the companies against the different dimensions.

DATA RESULTS AND ANALYSIS

We collected data from the university students about the five telecom sector organizations of Pakistan and from the employees of the selected companies to measure the existing employer brand equity in the minds of these respondents. According to Sekaran (2006), a value of alpha greater than 0.60 represents an acceptable range of inter-item consistency for a newly developed instrument; the values reported in Table 1 sufficiently fulfil the criteria for a reliable instrument. The data were further tested for correlation amongst the variables. To check the assumption of no multicollinearity among the variables, correlation analysis was carried out. The Table 2 results show significant correlations ranging from 0.121 to 0.565, which is well within the safe range, satisfying the assumption. As stated by Sekaran (2006), a correlation beyond 0.80 reflects multicollinearity amongst the variables, which was not the case in the present study. Table 3 shows that H1, "publicity as a component of employer brand equity is positively associated with the decision making about the employer, amongst the fresh graduates of Pakistan", has been rejected by the study.
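As a minimal illustration of the reliability and multicollinearity checks described above, the following sketch computes Cronbach's alpha for one scale and flags any inter-item correlation above the 0.80 threshold cited from Sekaran (2006). The column names are hypothetical placeholders, not the study's actual item labels, and the snippet is illustrative rather than the analysis code used in the study.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for one scale; rows are respondents, columns are items."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

def check_scales(responses: pd.DataFrame, scales: dict) -> None:
    # Reliability: alpha > 0.60 is treated as acceptable for a new instrument.
    for name, cols in scales.items():
        print(name, round(cronbach_alpha(responses[cols]), 3))
    # Multicollinearity screen: any off-diagonal correlation above 0.80 is flagged.
    corr = responses.corr().abs()
    high = (corr > 0.80) & (corr < 1.0)
    print("possible multicollinearity:", bool(high.any().any()))

# Hypothetical usage with placeholder column names:
# responses = pd.read_csv("survey.csv")
# check_scales(responses, {"publicity": ["pub1", "pub2", "pub3", "pub4", "pub5"]})
```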
The rejection of H1 also shows that publicity, without the mediation of educational institutions, does not affect decisions about talent recruitment in the selected companies of Pakistan. Similarly, advertising and sponsorship also show insignificant and negative relationships with decision making, rejecting H3 and H4, respectively. However, word of mouth as a component of employer brand equity has shown a significantly positive association with effective talent recruitment, with β and p values of 0.179 and 0.014 respectively.

Hypothesis testing

To check the model fitness and the mediating role of the educational institutions between employer brand equity and effective talent recruitment, structural equation modelling (SEM) was applied. (Key to table abbreviations: Pub = publicity, Adv = advertising, Wom = word of mouth, Spo = sponsorship, EA = employer awareness, EI = employer information, EF = employer familiarity, Res = responsiveness, DM = decision making. **Correlation is significant at the 0.01 level (2-tailed). *Correlation is significant at the 0.05 level (2-tailed).)

Tables 4 and 5 show the results of the first- and second-level CFA analyses (Figure 4). During the first-level analysis, the first item of the independent variable "publicity", namely "company reputation", with a factor loading of 0.411, was removed. Some other items also had factor loadings of less than 0.50; however, they could not be removed at the first level because of their interrelationships with other significant items. During the second stage of CFA, further items with factor loadings below the desired value of 0.50 were also removed. Item number 4, "top officials of the company are on media", and item number 5, "employees are loyal with the company", from the first dimension of the independent variable publicity were removed due to low factor loadings. Similarly, item numbers 1 and 5, "job advertisements in newspapers" and "job advertisements are clear and show required information" respectively, from "advertisement" did not meet the criteria and were removed. Item number 3, "PR of organization", and item number 5, "reliability as an employer", were also removed due to low factor loadings. From sponsorship, item number 4, "offer any student study loans", and item number 5, "sponsorship for university magazines", were also removed. The revised model is shown in Figure 4. To improve model fitness and the relevant indexes, several modifications, as suggested by the software, were applied one by one; on the whole, 12 modifications were applied. Table 6 shows the model summary indexes, in which the direct and indirect models are compared for fitness. The CMIN/DF (chi-square/DF) value shows good fit, that is, within the range of 1 to 3; however, the indirect model shows better strength, with a value of 1.905, in comparison with the direct model. The goodness of fit index (GFI) of 0.872 for each model reflects a good fit, as Raykov and Marcoulides (2006) state that it should be close to one. The root mean square residual (RMR) and root mean square error of approximation (RMSEA) should be closer to zero; these are 0.114 and 0.058 respectively in the current study, which fulfils the criteria. The ideal values of the comparative fit index (CFI) and Tucker-Lewis index (TLI) are also closer to one, and these also fulfil the criteria in this case, with values of 0.869 and 0.847 for the direct model and 0.869 and 0.848 for the indirect model.
Although the model fit indices for both models remained equal in most cases, the slight difference in TLI shows that the indirect model is better than the direct one and confirms the partial role of the mediator. As hypothesized, the educational institutions of Pakistan were found to play a partial mediating role between employers and university students in creating effective brand equity leading to decision making about prospective employers. The modified SEM models (direct and indirect) were tested for mediation and for the significance of the paths. Figures 6 and 7 show the results of the direct and indirect models respectively. The direct model shows the relationship between employer brand equity (IV) and effective talent recruitment (DV) without controlling the direct path, which shows fulfilment of all three assumptions of the Baron and Kenny (1986) method for partial mediation. As shown in Table 7, the first path, from the independent variable EBE to the mediating variable EDI, is significant with a coefficient of 0.743. Similarly, the second path, from the mediating variable EDI to the dependent variable ETR, is also significant with a beta value of 0.995. The third path, EBE to ETR, was found to be insignificant with a significance value of 0.505, which fulfils the requirement. However, the beta estimate, which was supposed to be zero for perfect mediation, came to -0.077, showing partial mediation. By controlling the direct path, the value of the relationship between EBE and EDI reduced to 0.731 and between EDI and ETR to 0.916, showing slight variations from the direct paths. This also confirms the partial mediation effect. In order to compare the selected companies in terms of promoting employer brand equity amongst university students through their respective educational institutions, cross-tabulation of each item was carried out. As shown in Table 8, the comparative analysis depicts that Mobilink was ranked as having the strongest reputation in the minds of fresh graduates, while Warid was the weakest as far as the items pertaining to the first independent variable, publicity, are concerned. Telenor was ranked excellent in terms of providing a good corporate culture, social welfare, media representation and employees' loyalty towards the company. Overall, in terms of publicity, Telenor is the company best known to fresh graduates. The results pertaining to advertisement as a dimension of employer brand equity were rated highest in favour of Mobilink; however, on the whole, mixed opinions were found for the different items. The results show that Mobilink dominates in publishing job advertisements in different newspapers and in sending alumni of the same institutes as brand ambassadors, whereas Ufone is rated highest for attractive job brochures. Telenor led by attaining the highest points for billboards, and Zong for providing clear and proper information required for the job. The university students considered Mobilink the most frequently heard-of company as far as the third dimension of employer brand equity, "word of mouth", was concerned, whereas Telenor was found to be the most attractive organization. Ufone was rated as the best company in maintaining and managing public relations. Ufone and Telenor were in close rating proximity on employer brand image, while in employer reliability Telenor took the lead from its competitors.
On the whole, in word of mouth, Telenor and Mobilink stand almost together. Zong stands as the most event-sponsoring organization, while Telenor was ranked at the top in granting study scholarships and study loans to students. As per the respondents of the study, Zong sponsors seminars and workshops on campuses most commonly. Mobilink was rated as "mostly" in sponsoring in-campus magazines and newsletters. On the whole, sponsorship was rated lowest for Warid, whereas the other four companies remained close competitors to each other. This study reveals that employers are not effectively using educational institutions to disseminate their brand equity, which was evident from the responses being rated "neutral" for most of the items. Mobilink was rated highest in paying visits to campuses while Warid remained the lowest. The highest frequency of e-mail prompts pertaining to job openings lies with Ufone. Telenor and Ufone were found to be the major event organizers in educational institutions. Regarding the supply of information about the organizational working atmosphere and detailed job postings, Mobilink was at the top, whereas summer internships were mostly offered by Zong, depicting overall unbalanced information from employers. Employer brand familiarity was measured by three components. Telenor was ranked the best and Warid the worst of the alumni-preferred employers. Participation in job fairs was mostly done by Ufone and Telenor. As far as the third component, "arranging organizational familiarity seminars", was concerned, Telenor and Warid remained at the top and bottom respectively. Similarly, in "responsiveness", Telenor and Ufone were rated as the most responsive and Warid as the least responsive organization by the respondents. Telenor led as the organization of choice as an employer in the mindset of fresh graduates when respondents answered the questions pertaining to the dependent variable, decision making. The overall rating of all the selected organizations in terms of employer brand equity remained neutral in the mindset of fresh graduates, as the highest rating went to "neutral". By applying post hoc multiple comparisons, we compared the companies. In this study, students were asked to rank the given companies according to the image they had in their minds about these companies, and multiple comparison analysis was done to check the existing image of the employer brand equity of the selected organizations in the minds of the students.

DISCUSSION

"Customer-based brand equity refers to beliefs held by individual consumers about a product's or a service's brand (that is, perceptions of the name or logo) that affect their preferences and purchasing decisions relative to other unbranded products or services with similar attributes" (Aaker, 1991; Keller, 1993); this was seen in the present study, such as in emphasizing the need to create employer brand equity among fresh talent in Pakistan in their decision making process, and in complementing the concept of branding in HRM first introduced by Ambler and Barrow (1996), depicting the employer as a brand and the employee as the end user. As per the study, employer brand equity is the most important element that needs to be clear in the minds of fresh graduates so that they can make better decisions about their future employer. The results of our study have evidenced that once the image has been developed in Pakistan's culture, it generally results in a steady flow of talented applicants (Sullivan, 1999).
The study analysis has demonstrated that none of the selected organizations of Pakistan showed a strong presence in all attributes of employer brand equity. As per Sullivan (1999), employment branding uses the tools of marketing research, PR and advertising to change the image applicants have of what it is like at the organization. Our respondents appeared to be attracted by the PR tools used by Ufone and have a good image of Mobilink and Telenor because of their advertising campaigns. According to the respondents, a good corporate culture image has a splendid impact in promoting the notion of an organization as a good place to work, which supports the view that the employer brand puts forth an image showing the organization as a good place to work (Sullivan, 2004). The study has indicated the need for an employer to be familiar to university students as their prospective future workplace, which is supported by Cable and Turban (2001), who hold that employer knowledge comprises familiarity, employer image and reputation. The study has maintained the impression that employer images consistently show stronger relationships with job seekers' intentions and attraction (Chapman et al., 2005) and have been shown to be enhanced as a result of exposure to news stories through the media (Slaughter et al., 2004), as supported by the respondents. The study has also underlined the significance of the information provided and of word-of-mouth transmission by the companies (Cable et al., 2000). Company reputation has a strong impact in the minds of university students regarding their prospective employer, which indicates that one major determinant of an organization's ability to recruit fresh graduates is organizational reputation, referring to the status of a firm's name relative to competing firms (Belt and Paolillo, 1982; Gatewood et al., 1993). Overall, three hypotheses were rejected and two were accepted. Looking at the labour market culture of Pakistan, the results have practical implications. Organizations are not paying any special attention to creating their brand equity to attract new market entrants. Word of mouth is the only source of awareness about an organization's employer brand equity; interestingly, organizations have no role in that. In general, the bottom line of the results shows a neutral ranking of employer brand equity, resulting in the need to establish a strong relationship between academic institutions and industry so that academia can be effective in creating and promoting the employer brand equity of organizations in the mindset of fresh graduates. The proposed model in this study shows the mediating role of the educational institutions; employers using this means can build a strong future labour force of their own choice.

CONCLUSION AND RECOMMENDATIONS

The current study shows that educational institutions have a partial mediating role between employer brand equity and effective talent recruitment in the mobile phone companies of Pakistan. However, the direct relationship between employer brand equity and effective talent recruitment is insignificant in the absence of the mediator, which proves the importance of educational institutions to the relationship. Pakistan's educational institutions are producing about two million fresh graduates of different disciplines for the labour market each year. Up till now, Pakistan's education system has not been directing the huge bulk of these new graduates into the labour market.
The proposition of this study was focused on the investigation and analysis of relationship building between industry and academia for effective talent recruitment through the promotion of employer brand equity in conjunction with educational institutions in Pakistan. The comparative analysis shows that the level of brand equity is almost similar for all the selected organizations; fresh graduates do not have any prioritized image of their employer in their minds, indicating unawareness. Students were almost neutral about all the organizations, showing that the organizations were not paying the required attention to creating a strong employer image and to developing talent pools from which to select their future effective employees. Employer brand equity plays a major role in shaping fresh graduates' mindset about their future employer. As per our model, educational institutions can be the best source for developing the employer image in the minds of fresh graduates, a relationship that is weak in the existing scenario. The study recommends that organizations seek an active mediating role of educational institutions to improve the effectiveness of new talent recruitment. In the future, research can be carried out on developing the curriculum for final-year students in partnership with industry to remove the gap. Moreover, future research can focus on developing an organizational culture that attracts fresh talent.
v3-fos-license
2019-07-05T13:58:05.517Z
2019-07-04T00:00:00.000
195795729
{ "extfieldsofstudy": [ "Medicine", "Biology" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://molecularbrain.biomedcentral.com/track/pdf/10.1186/s13041-019-0485-9", "pdf_hash": "c3a91433438ba59f55d37fdb321129036f9dd2e9", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41671", "s2fieldsofstudy": [ "Biology" ], "sha1": "c3a91433438ba59f55d37fdb321129036f9dd2e9", "year": 2019 }
pes2o/s2orc
A primate-specific short GluN2A-NMDA receptor isoform is expressed in the human brain Glutamate receptors of the N-methyl-D-aspartate (NMDA) family are coincident detectors of pre- and postsynaptic activity, allowing Ca2+ influx into neurons. These properties are central to neurological disease mechanisms and are proposed to be the basis of associative learning and memory. In addition to the well-characterised canonical GluN2A NMDAR isoform, large-scale open reading frames in human tissues had suggested the expression of a primate-specific short GluN2A isoform referred to as GluN2A-S. Here, we confirm the expression of both GluN2A transcripts in human and primate but not rodent brain tissue, and show that they are translated to two corresponding GluN2A proteins present in human brain. Furthermore, we demonstrate that recombinant GluN2A-S co-assembles with the obligatory NMDAR subunit GluN1 to form functional NMDA receptors. These findings suggest a more complex NMDAR repertoire in human brain than previously thought. Introduction NMDA receptors are activated by coincident glutamate binding and intracellular depolarisation. Ca 2+ entry via NMDARs can gate long-term biochemical and gene expression changes that alter synaptic strength, which are proposed as central to mechanisms of memory storage [17] and neurodegenerative processes [9]. Our current knowledge of NMDAR function is largely derived from the study of rodent receptors and heterologous expression of cloned rodent genes. Tetrameric NMDARs comprise two obligatory GluN1 subunits and two GluN2 or GluN3 subunits, and in the adult forebrain GluN1/GluN2A, GluN1/GluN2B diheteromers, and GluN1/GluN2A/ GluN2B triheteromers are the most common [18,19]. The subunit combination confers the distinct biophysical and pharmacological properties to the receptor channel. The developmentally and anatomically regulated gene expression of NMDAR subunits, together with diverse post-translational modification mechanisms and protein interactions, determines the assembly, trafficking, synaptic or extrasynaptic localisation and internalisation of NMDARs (Reviewed in [16]) and their correct functioning is necessary for human brain functions [5,6,21]. Homologous rodent and human NMDARs do share highly conserved subunit sequences and exhibit almost identical pharmacological properties [10]. However, large scale open reading frame studies performed with mRNA from a mix of human tissues [20,28] have suggested that in addition to the conserved NMDAR canonical isoform of GluN2A in chordates, a shorter isoform is produced in humans (GluN2A-S) generated by alternative splicing of human GRIN2A (Fig. 1a). Here, we show that this alternative NMDAR isoform is expressed in the human and primate brain, and that it forms functional receptors together with the obligatory subunit GluN1 [15]. The presence of alternative NMDAR subunits not expressed in rodent model systems indicates the existence of unexplored neural mechanisms in human synapses with relevance to normal function, ageing and neurological disease. Results and discussion To test whether the GluN2A-S mRNA (GRIN2A) is expressed in human brain, we designed primers (Fw1/ Rv1) flanking the region of exon 13 containing the 343 base pairs (bp) present only in GRIN2A (Fig. 1a). We predicted two distinct amplicons (474 bp and 131 bp) that would distinguish the GRIN2A and GRIN2A-S transcripts, respectively (Fig. 1a). 
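As a minimal sketch of the amplicon logic just described, the following snippet assigns an RT-PCR product from the Fw1/Rv1 primer pair to a transcript variant by its approximate size. The tolerance value is an arbitrary assumption for illustration and is not a figure from the paper.

```python
# Expected Fw1/Rv1 product sizes: the 343 bp spliced-out region accounts
# for the difference between the two variants (474 - 131 = 343).
EXPECTED_SIZES = {"GRIN2A (canonical)": 474, "GRIN2A-S (short)": 131}

def classify_amplicon(size_bp: int, tolerance_bp: int = 15) -> str:
    """Assign a gel band to a GRIN2A transcript variant by approximate size."""
    for isoform, expected in EXPECTED_SIZES.items():
        if abs(size_bp - expected) <= tolerance_bp:
            return isoform
    return "unassigned"

print(classify_amplicon(131))  # GRIN2A-S (short)
print(classify_amplicon(474))  # GRIN2A (canonical)
```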
Following PCR using cDNA from human brain (Table 1) as template, we observed the presence of a ~131 bp amplicon. In contrast, in mouse we observed one product of 474 bp, corresponding to the canonical isoform, using Fw1/Rv1 primers and a pair of primers modified to exactly match mouse Grin2A at the same location (mFw1/mRv1, Fig. 1b). If both short and long GRIN2A cDNAs are present in the human sample, the synthesis of the shorter cDNA could overwhelm the early PCR cycles [27] and only generate the short amplicon. Using an additional GRIN2A-specific reverse primer (Rv2), we confirmed the presence of canonical GRIN2A in this human cDNA sample (Fig. 1c). We observed the 131 bp band in further adult brain cDNAs tested with the Fw1/Rv1 primer pair (Fig. 1d, two further samples not shown). Although its expression levels increase developmentally, GluN2A is expressed throughout the life course [2]. We tested foetal human brain cDNA and confirmed the expression of both GRIN2A and GRIN2A-S (Fig. 1e).

Fig. 1 The GRIN2A gene has two transcript variants in human and primate but not mouse brain. a A short isoform of GRIN2A transcript was predicted by open reading frame studies. The published sequence of GRIN2A-S suggests that the final exon (exon 13) is missing two nucleotide regions compared to the canonical transcript: firstly, the lack of 343 nucleotides generates a putative exon 14 in GRIN2A-S (splice site shown in Ai), and the final 206 nucleotides of the canonical form are lost altogether. We designed primers to amplify the region of variance between the two isoforms (Fig. 1, Fw1/Rv1), generating an amplicon of 474 bp in GRIN2A and 131 bp in GRIN2A-S. A second reverse primer was designed to amplify canonical GRIN2A selectively (Fig. 1a, Rv2), generating an amplicon of 380 bp. b-f RT-PCR amplification end products; control conditions indicate no cDNA template was used in PCR. b In human cDNA, only the short form of GRIN2A was observed, likely due to preferential amplification in PCR, whereas only a long product of 474 bp was seen in mouse using either human- or mouse-specific primers. c The Fw1/Rv2 primer pair was used to confirm expression of canonical GRIN2A in the same human sample as shown in (b). d Three other human cortical samples with GRIN2A-S amplified. e Both the short and long amplicons were observed in human foetal cDNA. f GRIN2A-S was observed in primate (Rhesus) brain cDNA. g Sequencing of human and primate RT-PCR short amplicons confirmed the presence of the putative splice site shown in (Ai).

Thus, our data confirm the presence of both canonical GRIN2A and the novel GRIN2A-S transcripts in human brain tissue samples. A BLAST search of the 131 bp sequence amplified by primers Fw1/Rv1 provided primate-specific predictive hits. To confirm whether the GRIN2A-S transcript is present in primate brain, we tested Rhesus macaque brain cDNA with Fw1/Rv1 primers and confirmed the presence of GRIN2A-S (Fig. 1f). We sequenced the shorter PCR products for both the adult human and primate samples and confirmed the presence of the exact splice site reported in the European Nucleotide Archive (coding: AAI17132.1; [20]) (Fig. 1a, g). Furthermore, we aimed to evaluate whether the GRIN2A and GRIN2A-S transcripts were translated into the corresponding proteins. To this end, we hypothesised that if both transcripts are translated into proteins, two protein bands corresponding to GluN2A and GluN2A-S in human homogenate would be immunodetected by a GluN2A antibody targeting an epitope conserved between the canonical and short GluN2A isoforms (Fig. 2a, b).

Fig. 2 Two GluN2A protein bands are observed in human but not mouse brain. a Topology of the GluN2A subunit of the NMDA receptor and of GluN2A-S predicted from human mRNA studies. A spliced region is retained in canonical GluN2A, leading to an alteration of the reading frame to generate a diverging C-terminal sequence (red) with early truncation. Epitopes for the antibodies used are shown in green and are numbered. b Immunoblots with specific antibodies against the canonical GluN2A and putative GluN2A-S protein in human and mouse cortical lysates. c GluN2A proteins were pulled down with an N-terminal antibody. Band 2 was cut from a Coomassie-stained polyacrylamide gel and analysed by mass spectrometry. IP, immunoprecipitate; FT, flowthrough. d A set of 14 peptides were confirmed to be present in this band.

Importantly, Western blot analysis from human brain homogenates confirmed the presence of two immunoreactive bands with molecular weights corresponding to the predicted GluN2A and GluN2A-S isoforms. On the contrary, a single immunoreactive band with the high molecular weight (corresponding to the canonical GluN2A) was detected in mouse brain lysates (Fig. 2b). Furthermore, using an antibody specifically detecting a carboxy-terminal epitope (exclusively present in the canonical GluN2A isoform), we detected the presence of a single band with a molecular weight corresponding to the canonical GluN2A subunit (Fig. 2a, b). To confirm the identity of the low molecular weight band, we immunoprecipitated GluN2A from human tissue homogenates with an antibody against the conserved N-terminal domain and analysed the primate-specific band (Band 2, Fig. 2c) by mass spectrometry. This unbiased method allowed the identification of 14 unique peptides located within GluN2A residues 81-1022, confirming that this low molecular weight band contains the proximal part of GluN2A, and thus discarding the potential cross-reactivity of the anti-GluN2A N-terminal antibody (Fig. 2d). To assess the relative levels of GluN2A-S vs. total GluN2A, human cortical homogenates from fresh-frozen tissue resected from individuals 27-61 years of age were analysed by Western blot, and the quantification showed that GluN2A-S immunoreactivity accounts for 34 ± 4% of canonical GluN2A protein in cortical human brain homogenate (Table 1) and that this fraction remains constant across age (Fig. 2ei). Finally, to test whether GluN2A-S can be incorporated into functional NMDARs, we co-transfected HEK293 cells with plasmids expressing GluN2A-S and GluN1 subunits [15]. A slow voltage ramp delivered during local perfusion with 40 μM NMDA and 10 μM glycine elicited a typical J-shaped curve (Fig. 2fi), and subtraction of responses without agonist (leak subtraction) revealed a typical NMDA current with reversal potential near 0 mV (Fig. 2eii). This confirms that GluN2A-S subunits are able to assemble with GluN1 subunits and become inserted into the plasma membrane to form a functional NMDAR that likely plays a role in human neural function. Here we describe for the first time the brain expression of an uncharacterised, primate-specific NMDAR subunit. The splice site for GluN2A-S suggests that it will contain a diverging 19 aa sequence in its extreme C-terminal domain (Fig.
2a), in addition to lacking the distal carboxy terminal domain (183 amino acids) that contains PKC/SFK phosphorylation sites, two PDZ binding motifs that allow synaptic localisation [4,12,14], and a dileucin clathrin adaptor motif involved in receptor internalisation [13]. Following many lines of published evidence, these differences suggest that the dynamic regulation of GluN2A-S in response to stimuli could diverge from GluN2A subunit-containing NMDARs. This could impact the number of receptors present synaptically or extrasynaptically, the insertion of new receptors into the membrane, their lateral diffusion and clustering into synapses and their active removal. The potential changes in human synapses compared to mouse neurons void of GluN2A-S could result in distinct mechanisms involved in activity-dependent plasticity of synapses, which will highly depend upon its triheteromeric partners [1,8,19]. Together, our data suggest that GluN2A-S is a primatespecific NMDAR subunit and a substantial component of functional NMDARs in the adult human brain. Many neuronal mechanisms discovered in mice have been successfully recapitulated in humans. However, mounting evidence suggests that there are important differences between rodent and human neurons that result in distinct signal integration properties [22,23,26] and proteomic composition of synapses [3]. Species differences may ultimately impact the way in which human neural circuits can be computationally modelled [7], and the translation of pre-clinical findings into approved therapies [24]. Further analyses of GluN2A-S spatio-temporal gene expression and synaptic/ extrasynaptic localisation will enhance our knowledge of their functional role and may uncover NMDAR trafficking mechanisms present only in primates and diverging sequences may uncover novel therapeutic targets. Human brain tissue samples All samples consisted of cortical tissue resected for access to deeper brain lesions such as sclerotic hippocampus in epileptic patients (the pathological tissue for these lesions was not used). Informed consent was obtained from all patients to use surgically-resected putatively non-pathological tissue not required for diagnostic purposes (see ethical approval declaration). Briefly, resected tissue was obtained from temporal cortex of patients undergoing surgery for the removal of deeper structures. Tissue was collected in ice cold artificial cerebrospinal fluid [26] then taken to the laboratory, frozen and kept at − 80°C. Transfer time was of the order of 10-15 min. Mouse brain tissue Mice were decapitated following isoflurane anaesthesia (see ethical approval declaration). Brains were extracted in ice-cold ACSF and sliced or snap-frozen. All brain tissue samples were stored in the -80C freezer until lysed. RNA extraction and cDNA synthesis Total RNA was isolated from human and mouse tissue using Trizol and then purified using the RNeasy Mini kit (QIAGEN) following the manufacturer's instructions. cDNA was synthesised immediately from 200 ng of total RNA per reaction using the SuperScript IV reverse transcriptase and cDNA synthesis kit (INVITROGEN) according to the manufacturer's instructions. The cDNA obtained was stored at − 80°C. Human foetal cDNA was obtained from Takara (Normal brain tissue cDNA, pooled from 59 spontaneously aborted male/ female Caucasian fetuses, ages: 20-33 weeks). Rhesus macaque cDNA was obtained from Amsbio (Normal brain tissue, Female, 4.5 years). 
PCR conditions and primers PCR was performed on 1μg of cDNA using primers and REDTaq® Readymix™ PCR Reaction Mix (Sigma-Aldrich) for 40 cycles. DNA was denatured at 95 degree C and extended at 72 degree C for 45 s each cycle. Products were separated on a 1.5% agarose gel. GluN2A pulldown GluN2A was pulled down from 1 mg of protein homogenate (in RIPA buffer) using 2 μg of Alomone antibody AGG-002 beads. Eluate was run in SDS page and stained with Coomassie dye. The lighter band corresponding to putative GluN2A-S was cut and analysed by LC-MS/MS [11]. HEK293T cell recordings HEK293T cells were cultured at 37 C with 5% CO 2 in Dulbecco's Modified Eagle Medium with glucose, Lglutamine and pyruvate, 10% FBS and 1% Pen-Strep and seeded at low density onto poly-L-lysine coated glass coverslips for electrophysiology. Adherent cells were transfected using JetPEI reagent with NR1a and GluN2A-S plasmids at a 1:1 ratio and recorded after 48 h. Borosilicate glass micropipettes were pulled to produce a resistance of 4-6 mOhm and filled with intracellular recording solution containing in mM: Gluconic acid 70, Caesium chloride 10, sodium chloride 5, BAPTA 10, HEPES 10, GTP 0.3 ATP 4 and pH balanced to 7.3 with caesium hydroxide. Cells were perfused with aCSF throughout recording containing, in mM: sodium chloride 126, calcium dichloride 2, glucose 10, magnesium sulfate 2, potassium chloride 3, NaH 2 PO 4 1.25 and NaHCO 3 26.4, and glycine 10 μM and pH regulated by continuous bubbling of 95% oxygen, 5% CO 2 . Recordings with or without addition of NMDA 40 μM were made in whole-cell voltage clamp and Matlab software and amplified using an Axopatch 200B as previously described [28].
v3-fos-license
2020-01-29T16:29:11.090Z
2020-01-28T00:00:00.000
210938491
{ "extfieldsofstudy": [ "Medicine", "Computer Science" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://bmcmedinformdecismak.biomedcentral.com/track/pdf/10.1186/s12911-020-1021-7", "pdf_hash": "0be43ef54f77e62ce182039fe149f1287b4dd784", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41672", "s2fieldsofstudy": [ "Medicine" ], "sha1": "0be43ef54f77e62ce182039fe149f1287b4dd784", "year": 2020 }
pes2o/s2orc
Design and implementation of a clinical decision support tool for primary palliative Care for Emergency Medicine (PRIM-ER) Background The emergency department is a critical juncture in the trajectory of care of patients with serious, life-limiting illness. Implementation of a clinical decision support (CDS) tool automates identification of older adults who may benefit from palliative care instead of relying upon providers to identify such patients, thus improving quality of care by assisting providers with adhering to guidelines. The Primary Palliative Care for Emergency Medicine (PRIM-ER) study aims to optimize the use of the electronic health record by creating a CDS tool to identify high risk patients most likely to benefit from primary palliative care and provide point-of-care clinical recommendations. Methods A clinical decision support tool entitled Emergency Department Supportive Care Clinical Decision Support (Support-ED) was developed as part of an institutionally-sponsored value based medicine initiative at the Ronald O. Perelman Department of Emergency Medicine at NYU Langone Health. A multidisciplinary approach was used to develop Support-ED including: a scoping review of ED palliative care screening tools; launch of a workgroup to identify patient screening criteria and appropriate referral services; initial design and usability testing via the standard System Usability Scale questionnaire, education of the ED workforce on the Support-ED background, purpose and use, and; creation of a dashboard for monitoring and feedback. Results The scoping review identified the Palliative Care and Rapid Emergency Screening (P-CaRES) survey as a validated instrument in which to adapt and apply for the creation of the CDS tool. The multidisciplinary workshops identified two primary objectives of the CDS: to identify patients with indicators of serious life limiting illness, and to assist with referrals to services such as palliative care or social work. Additionally, the iterative design process yielded three specific patient scenarios that trigger a clinical alert to fire, including: 1) when an advance care planning document was present, 2) when a patient had a previous disposition to hospice, and 3) when historical and/or current clinical data points identify a serious life-limiting illness without an advance care planning document present. Monitoring and feedback indicated a need for several modifications to improve CDS functionality. Conclusions CDS can be an effective tool in the implementation of primary palliative care quality improvement best practices. Health systems should thoughtfully consider tailoring their CDSs in order to adapt to their unique workflows and environments. The findings of this research can assist health systems in effectively integrating a primary palliative care CDS system seamlessly into their processes of care. Trial registration ClinicalTrials.gov Identifier: NCT03424109. Registered 6 February 2018, Grant Number: AT009844–01. Background The emergency department (ED) represents a critical decision point in the trajectory of care for patients with serious life-limiting illness, as three-quarters of these patients visit the ED in the 6 months before death [1]. These visits can indicate worsening clinical and functional status, which often occur in the setting of a breakdown in the coordination of care [2,3]. 
A gap exists in the delivery of goal-concordant care, with steadily increasing intensive care admissions from the ED despite the fact that most patients with serious illness prefer to be home at the end of life [4,5]. Equipping emergency providers with basic skills and competencies in palliative care, commonly termed primary palliative care, affords an opportunity to align care trajectory with patient goals. Providing palliative care in the ED has been demonstrated to improve quality of life, decrease intensive care unit admissions, decrease inpatient hospital length of stay, improve symptom management, and decrease cost [6][7][8][9][10][11]. However, referral to palliative care remains low, with 78.5% of emergency medicine (EM) providers reporting they refer patients with palliative care needs less than 10% of the time, and only 10.8% of providers feel they use effective methods to screen or refer patients to palliative care [12]. Emergency providers have previously identified time constraints and implementation logistics as the most challenging limitations to providing palliative care services in the ED. [13] Despite these challenges, interest in education, training, and delivery of palliative care in the ED continues to grow [13][14][15]. To optimally deliver primary palliative care in the ED, patients who could benefit from these services must be quickly, and accurately, identified. Existing tools to efficiently identify patients who may benefit from palliative care in high-acuity settings are currently limited to multitier screening tools which involve additional staffing to assist in patient identification, or rely heavily on emergency provider judgement [16][17][18]. With the pervasiveness of electronic health records (EHR), institutions can leverage electronic clinical decision support (CDS) to assist providers in identifying patients most likely to benefit from primary palliative care and provide point-of-care clinical recommendations [19,20]. These CDS tools have dramatically evolved over the past 25 years to support diagnosis, treatment, care-coordination, and prevention [21]. Development of a palliative care CDS tool would assist in the sensitive, rapid, and efficient identification of adults who may benefit from palliative care, rather than relying on providers to identify such patients on an ad hoc basis. Additionally, these tools may improve quality of care by assisting with adherence to guidelines to reduce variance in provider practice. Specifically, they could provide targeted recommendations-such as consultation to multi-disciplinary palliative care teams and medication recommendations. As such, we sought to determine if it was feasible and usable to create a novel CDS tool adapted from an existing palliative care screening tool entitled P-CaRES. Furthermore, we aim to describe the design, implementation, and monitoring of a CDS tool as part of a National Institutes of Health (NIH)-funded pragmatic trial aimed at improving quality of care for patients with serious life-limiting illness in diverse ED environments that vary in specialty geriatric and palliative care capacity, geographic region, payer mix, and demographics. Design and implementation An Emergency Department Supportive Care CDS tool (Support-ED), was developed at the Ronald O. Perelman Department of Emergency Medicine at NYU Langone Health (NYULH). 
The system was developed as part of an institutionally sponsored Value-Based Management initiative and an NIH grant titled, "Primary Palliative Care for Emergency Medicine (PRIM-ER)." [22]To develop Support-ED, creators of the tool devised a multistep process to ensure a comprehensive and practical tool was implemented. These steps included: 1) a scoping review of existing ED palliative care screening tools; 2) creation of a multidisciplinary workgroup to identify patient screening criteria and appropriate referral services; 3) initial design and usability testing using the System Usability Scale (SUS) questionnaire; 4) education of the ED workforce on the Support-ED background, purpose and use, and 5) the creation of a dashboard to monitor frequency of alert firing and correlation with targeted actions. This study was approved by the New York University School of Medicine Institutional Review Board. Scoping review As an initial step, a scoping review of validated screening tools for unmet palliative care needs in the ED was conducted in March 2018 by author AT utilizing Pubmed. Based on the review, the Palliative Care and Rapid Emergency Screening (P-CaRES) was identified as the only screening tool to meet our search criteria and identify emergency patients with serious, life-limiting illness who could benefit from palliative care services [12]. P-CaRES consists of a two part screening process. The initial step screens for life-limiting conditions, including end-stage organ disease, advanced cancer, septic shock or multiorgan failure in elderly patients, or a high chance of accelerated death (e.g. cardiac arrest). The second step of the tool screens for functional decline, uncontrolled symptoms, caregiver distress, or provider gestalt regarding limited prognosis [10]. Other palliative care screening tools were excluded from consideration since they were not validated and/or were only tested at a single site. The P-CaRES framework was subsequently used to identify structured clinical data points within the EHR that could be used as triggers for a CDS tool. Multidisciplinary workgroup A workgroup comprised of seven individuals-one emergency/pallative care physician, two clinical operations physician leaders, an emergency medicine physician with information technology expertise, a nurse informaticist, a care manager and a social worker with expertise in the ED were assembled to participate in a think aloud methodology. The meeting objectives and tasks were to provide recommendations on; 1) screening criteria, 2) targeted recommendations including consultation to palliative care or social work services; and 3) design specifications such as how, when, and for whom an alert would be generated. Weekly meetings were conducted in-person in order to obtain valuable insights on the CDS creation process. Notes were taken during each of the nine in-person meetings and data was reviewed following each meeting to extract themes. When disagreements among the group occurred, group discussions were held until consensus was reached across all stakeholders. Screening criteria The P-CaRES screening criteria was modified and adapted by the workgroup to initially meet the specific needs of the NYULH workflow and population, with the ultimate goal of developing a tool that could be applied across the 35 diverse EDs enrolled in the PRIM-ER intervention. 
To identify patients who would benefit from palliative care, the workgroup systematically reviewed each of the components of the P-CaRES tool and identified structured clinical data points within the EHR that would serve as a surrogate within the Support-ED tool. For example, in place of "end stage renal disease", a "Glomerular Filtration Rate (GFR) < 15 ml/min/m 2 " was selected which could be easily extracted from the EHR. The finalized criteria included historical data points that were surrogates for serious life-limiting illness, such as the presence of an advanced care planning document and critical lab values extracted from the current ED encounter. The aim for the selection of these criteria was to accurately identify serious illness and capture those patients who could benefit from primary palliative care interventions and referral to services while remaining specific enough to prevent over-firing and alert fatigue. Targeted recommendations Once Support-ED identified a qualifying patient, emergency providers received an alert to initiate a preliminary goals-of-care discussion and to consider a referral to the appropriate consult services. To determine if a referral was required, clinical questions modeled from the P-CaRES tool, such as measurements for worsening functional status, the presence of uncontrolled symptoms, or unclear goals of care, were asked of the providers. If the response was "yes", the alert recommended a referral to palliative care service and/or social work. The ED social workers, who serve as the institutional liaisons with community hospice agencies, also received automatic notifications for any patients presenting with a history of prior hospice enrollment. The multidisciplinary workgroup based the referral system options on clinical practice scope and local capacity of the ED care team and referral services. Design specifications After establishing the clinical firing criteria and follow-up workflow, the workgroup used an iterative design process to construct the Support-ED framework. Primary design considerations included 1) interruptive vs. non-interruptive alerts, 2) alert timing and 3) alert audience. 1. Interruptive vs. Non-Interruptive Alerts: Considering the high pace and patient volume of the emergency department, the workgroup determined interruptive alerts would be more effective than noninterruptive alerts for providers. Interruptive alerts force emergency providers to temporarily pause their workflows to acknowledge the trigger. To prevent alert fatigue, specificity was emphasized over sensitivity to avoid over-firing. Non-interruptive alerts were employed for social workers and care managers given differences in their workflows. 2. Alert Timing: To ensure emergency providers received sufficient time to evaluate patients and analyze pertinent clinical data, the alert identifying patients with serious illness fired 1 hour after provider assignment. This timing allowed the emergency provider the opportunity to review the patient's record before initiating a goals of care conversation. In contrast, the alert notifying emergency triage nurses of active advance care planning documentation for critically ill patients fired immediately upon chart opening so this information could be urgently relayed to the treating provider to affect care trajectory. 3. Alert Audience: Rather than solely targeting emergency providers, alerts were established for emergency nurses, social workers, and care managers as well. 
Each alert served a different purpose that aligned with the specific roles and practice scope of each personnel type. In addition to serving a clinical purpose, Support-ED promoted teamwork and a collaborative approach. Usability testing Prior to activating Support-ED within the NYULH EHR (PRIM-ER Pilot Site), think-aloud usability testing was conducted with a cohort of ED staff, including nurses, physicians, and clinical operations leadership between August and September 2018. During these sessions, testers explored multiple clinical scenarios within the EHR test environment that elicited different alerts. The scenarios included active advance care planning documents, active hospice, or the identification of patients with a possible serious illness that would fire an alert depending on the role of the user as either a provider, nurse, or social worker. Participants then verbalized any questions or issues they identified to the facilitators during these sessions. Upon completion of the scenarios, participants completed the standard System Usability Scale (SUS) questionnaire regarding their summative experiences [23,24]. The SUS is a 10 item questionnaire with one of five responses that range from Strongly Agree to Strongly Disagree [24]. The SUS tests perceived usability and ease of use, as well as learnability [25,26]. The final score is calculated by subtracting 1 from odd numbered questions, and for even numbered questions subtracting the value from 5. Sum of the scores is then multiplied by 2.5 for a final score [23]. SUS scores have a range of 0-100 and a score above a 68 is considered above average [24,27]. Verbal feedback from these sessions was then presented to the workgroup and modifications were incorporated into the tool prior to launch. Education of ED workforce As part of PRIM-ER, evidence-based, multi-disciplinary primary palliative care education and simulation-based workshops are a major component of the study. All full-time emergency attendings and physician assistants completed an online didactic course on primary palliative care knowledge and skills in needs assessment and referral. This was supplemented by a simulation workshop in end-of-life communication carried out by a group of emergency physicians with expertise in palliative care. Similarly, all full-time emergency nurses completed an online nurse-focused didactic course on primary palliative care knowledge and skills. Further details regarding the PRIM-ER protocol can be found in the previously published study protocol paper [22]. Immediately following these sessions, education on Support-ED was provided including the purpose and specific workflows associated with each of the alerts. Monitoring The final component of PRIM-ER includes audit and feedback, which includes monitoring of the CDS tool. At NYULH, a clinical dashboard was created utilizing Tableau software (Version 2019.2.2) [28] to track the frequency in which each alert fires as well as the number of consults that are ordered as a result of each distinct alert at a departmental level. The dashboard also captures qualitative data through a comment text field that is prompted when a provider acknowledges that they are overriding one of the alerts and they are required to input their rationale. This dashboard is monitored on a weekly basis by PRIM-ER researchers (AT, JS) following "go-live" of Support-ED. Findings are disseminated to the multidisciplinary workgroup and ED leadership on a biweekly basis and informed future adaptations of the tool. 
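Before turning to the results, the SUS scoring rule described above (odd-numbered items scored as the response minus 1, even-numbered items as 5 minus the response, with the sum multiplied by 2.5) can be written compactly. This is a generic illustration of the published SUS formula, not code from the study.

```python
def sus_score(responses):
    """System Usability Scale score from ten 1-5 Likert responses (items 1-10)."""
    if len(responses) != 10:
        raise ValueError("SUS requires exactly ten item responses")
    total = 0
    for item_number, r in enumerate(responses, start=1):
        total += (r - 1) if item_number % 2 == 1 else (5 - r)
    return total * 2.5  # yields a score between 0 and 100

# Example: a fairly positive respondent scores 90.0; 68 is the published average.
print(sus_score([5, 1, 5, 2, 4, 1, 5, 1, 4, 2]))
```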
Results Based on feedback and notes directly obtained from the workgroup meetings, three alerts within the Support-ED tool were generated. To gain further feedback and buy-in, stakeholders including ED service chiefs, hospital leadership, and leadership of each of the affected service lines provided additional perspective and insight. The iterative design process yielded three distinct patient scenarios that trigger an alert. For each alert that was developed, the triggering criteria, target provider, and response options are outlined in Table 1. A detailed description of each alert type in Table 1 is provided below, and screenshots from the EHR that demonstrate each specific alert are included in Figs. 1, 2, 3. Alert #1: advance care planning document present The first alert is purely informational and is triggered by the presence of an advance care planning document within the EHR with an accompanying ESI (Emergency Severity Index, a clinical triage acuity scoring tool between 1 and 5, with 1 = "most severe/urgent") of 1 or 2 for the nurses and providers [29]. This alert targets nurses since they are typically the first to access a patient's chart and could rapidly relay the information to a provider to ensure that the care provided would be aligned with the patient's previously specified wishes. An alert (Fig. 1) subsequently fires for emergency providers recommending the initiation of a goals of care conversation and consideration of a palliative care and/or social work consult. Alert #2: hospice The second alert (Fig. 2) is triggered by a previous discharge disposition to home or inpatient hospice. If the patient presented to the ED subsequent to enrollment with hospice services, an informational alert fires for the providers. A similar alert also fires for the ED care managers and social workers to initiate care coordination, address social needs, and contact the hospice agency. Alert #3: serious life-limiting illness without advance care planning documentation The third alert (Fig. 3), designed to identify patients with serious life-limiting illnesses, is triggered by the historical data points and current encounter data points shown in Table 2. The presence of any of these data points triggers an alert for providers recommending the initiation of a goals-of-care conversation and consideration of a palliative care and/or social work consult. In addition, to reduce immediate dismissals of the three alerts and to improve adherence, we required providers to manually input a reason for not following the advised action. This forced providers to pause and consider their decision before overriding the alerts. Usability testing Ten ED staff (physicians n = 7, nurses n = 3) completed the PRIM-ER SUS test on August 9, 2018. The users scored an average of 92.5 (range 75-100, SD = 7.56). A minimum average score of 85 is considered "excellent" for perceived ease of use [16]. Based on these results, the PRIM-ER research team felt confident in moving the CDS into the initial testing phase. Initial testing, implementation, and adaptations To ascertain the frequency of the alerts firing, the Support-ED tool ran in the background while alerts to providers were silenced for 4 weeks in September 2018 prior to the formal launch or "go-live."
During this period, the three alerts together fired 844 times out of approximately 9,000 total encounters across three clinical sites (9% of encounters). During the initial 4-week period following launch of the tool, alert frequency was closely monitored, as were correlations to consults ordered (Table 3). In addition, when the alert recommendation was not followed, qualitative feedback from providers was collected. The qualitative data extracted from the dashboard revealed insights and rationales for why providers were overriding the alerts. Examples included, but were not limited to, "will order pending family discussion," "have not yet evaluated patient," "patient acutely ill," "ED patient without significant risk of death in ED," "not a member of the primary care team," and "I am seeing her as an outpt [outpatient] consultant after discharge." Within the override comments open text field, some providers used this area as an opportunity to express concerns. For example, one provider noted "please stop alerting me I just got here. I do not know if they need this yet. This is very disruptive to flow." Another provider expressed that "alarm fatigue is dangerous." The dashboard monitoring of both the quantitative and qualitative data proved invaluable, as it was the impetus for improving the tool's specificity and acceptability. Based on the early data obtained from the dashboard, modifications were made rapidly to maximize buy-in and minimize alert fatigue; they are described in Table 4 and summarized below, each with its rationale. One early change resulted from provider feedback regarding the lack of utility of firing on lower acuity patients. The remaining modifications were:
- If a palliative care consultation was already placed, Alert #3 does not fire. Rationale: amended to reduce the redundancy of orders.
- Update all three alerts to fire for all the providers on the ED care team. Rationale: the goal was to notify each of the providers on the ED care team instead of, for example, only the attending provider.
- Discontinue all three alerts from firing for providers who are not part of the ED care team (e.g., consultants). Rationale: amended to target the right provider.
- Update all three alerts to fire only once for each ED provider. Rationale: amended to reduce the redundancy of alert firing.
- Firing of Alert #1 and Alert #3 changed from T + 60 min to T + 90 min after ED arrival. Rationale: based on provider feedback recommending firing later to allow sufficient time for patient evaluation and analysis of lab results.
- Removal of "previous discharge disposition to nursing home" and "GFR < 15 ml/min/m2" from the criteria for Alert #3. Rationale: based on dashboard feedback, these two criteria led to the most frequent firing and were removed to increase alert specificity.
- Suspension of Alert #3. Rationale: based on negative comments and over-firing, the decision was made to suspend this alert.
Discussion The Support-ED clinical decision support tool was developed to address a need for clinically relevant and timely identification of patients with palliative care needs in the ED, coupled with actions relevant to care providers in this setting. Important lessons were learned from the development and initial launch of Support-ED at NYULH. Primarily, the modification of the P-CaRES screening survey into a functional CDS tool was scored highly for usability, as indicated by the SUS score of 92.5 (range 75-100, SD = 7.56) given by NYULH ED providers. This is similar to the usability and acceptability testing of other ED palliative care screening tools.
For example, 80.5% of emergency providers who tested the P-CaRES tool felt it would be useful for their practice [12], and 70% of providers indicated a content-validated palliative care screening tool developed by Ouchi et al. (2017) was acceptable [32]. In contrast, the percentage of encounters that identified palliative care needs from the CDS tool (9%) is much lower compared to the 32% positive screening found during feasibility testing of the Ouchi et al. tool. However, it is difficult to compare Support-ED with the Ouchi et al. tool since Support-ED is a realtime CDS tool, compared to a retrospective survey applied for the Ouchi et al. tool [32]. The importance of the dashboard for audit and feedback was critical to refining and monitoring our CDS tool. As suggested by Wright et al., the "importance of monitoring and evaluating decision support interventions after they are deployed and improving them continuously" cannot be overemphasized [33]. In our case, this information provided the workgroup with critical information that informed the amendments and modifications that were made to the tool. Despite the workgroup's best efforts at modifying the "Serious Life-Limiting Illness with No Advance Care Planning Document" alert, feedback continued to be negative with persistent over-firing and thus, this alert was eventually suspended. One important consideration that was subsequently deliberated by the workgroup was the targeted outcome for the alerts. A positive screen was initially defined as those alerts that generated a consultation to either palliative care or social work. Given the focus of the study interventions on providing the ED care team with the tools to improve primary palliative skills and to carry out goals of care conversations without relying on consultants, targeting this outcome primarily proved to be inaccurate. A subsequent quality improvement project focused on the creation of an advance care planning note specifically for emergency providers. Utilizing this as an additional outcome measure may more accurately capture the progress of emergency providers in carrying out these conversations. Overall, there is a dearth of literature on CDS development and implementation in the context of palliative care, as well as integration in an ED setting [34]. To date, what is known is that CDS tools can play an integral role in assisting healthcare providers in providing patients optimal care by providing patient-specific recommendations at the point of need [11,35]. Using the current body of literature on CDS challenges [36] and guidelines including best practices and principle guidelines outlined by Wright et al. [33] and the GUIDES checklist [37], we anticipated and developed strategies to overcome these challenges. Leveraging key stakeholders, aligning with organizational priorities and goals, and employing an iterative process for continual improvements and monitoring the usability led to our success [33]. Limitations There were several limitations we encountered during the design and implementation of this CDS tool. There is currently little evidence available in the literature describing which data elements should be utilized to identify patients with serious life-limiting illness within the EHR-thus we used expert consensus to determine which computer-interpretable data elements were utilized within the alerts. Second, CDS tools universally encounter barriers to acceptance and adoption by providers. 
Based on empirical studies and current recommendations, a major key to success for CDS tools is the integration into clinical workflow [35]. There is no standard for clinical workflow in the ED given the diversity of patient presentations and individual practice patterns making seamless integration of the alert into workflow challenging. This may have led to provider frustration and alert fatigue and as a result, less adherence to alert recommendations. Adaptation and future directions To date (as of December 2019), the PRIM-ER research team has successfully aided in the implementation of a tailored Support-ED within the first 10 sites enrolled in the PRIM-ER intervention. Each site will deploy Support-ED in a stepped-wedge, randomized design [22]. To best tailor Support-ED to the unique workflows and environments of each participating site, interviews were conducted with key local stakeholders comprised of palliative care, emergency nursing, social work/case management, informatics, and ED operations representatives. These interviewers acquired information on health system palliative care resources, palliative care-related CDS tools currently utilized in the EHR, how to best customize the CDS to existing workflows, cultural norms that might alter receptivity to CDS, the CDS approval process, and current audit and feedback or quality metric reporting tools. To adapt Support-ED to each health system's unique workflows and best assimilate the feedback of all stakeholders, a CDS mapping document was created (Fig. 4). This modifiable document allows stakeholders to select, remove, and insert additional criteria to trigger the alerts and elicit specified outcomes. This document also will allow stakeholders to restructure and modify elements of the tool, such as the timing of the alerts firing or format of alerts to best serve their institutional needs. At each enrolled site, we gather written feedback from all stakeholders, and that feedback is compiled into a single mapping document and returned to each site for further refinement. From there, the NYULH team shares the baseline build with other institutions using the same EHR to provide sites with modifiable variables and template logic for ease of reproducibility and adaptation. As not all sites utilize Epic, Support-ED is designed to be EHR-agnostic so that it can be generalizable to all sites within the intervention, as well as other sites not enrolled. For example, one of the study sites recently implemented a modified version of the Support-ED within Cerner's EHR, utilizing the NYULH build as a foundation to customize their CDS tool. Upon study completion at all 35 sites, the overall goal is to share Support-ED more broadly through proper dissemination strategies so other health systems can tailor and adapt the build to their EHRs. Conclusions CDS alerts can be an effective tool in the implementation of primary palliative care quality improvement best practices. Health systems should thoughtfully consider tailoring and customizing their CDS tool in order to adapt to their unique workflow and environments. The findings of this research can assist health systems in the effective adaptation and seamless integration of a primary palliative care CDS tool into their standards of care.
v3-fos-license
2021-07-29T13:18:27.929Z
2021-07-29T00:00:00.000
236477160
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.frontiersin.org/articles/10.3389/fphar.2021.719519/pdf", "pdf_hash": "3fe84ffb8c5ca28c752344623b050534971b88ba", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41673", "s2fieldsofstudy": [ "Medicine" ], "sha1": "3fe84ffb8c5ca28c752344623b050534971b88ba", "year": 2021 }
pes2o/s2orc
Prescription of Radix Salvia miltiorrhiza in Taiwan: A Population-Based Study Using the National Health Insurance Research Database Objective: While radix Salvia miltiorrhiza (Danshen; RSM) is commonly used in Chinese herbal medicine, its current usage has not yet been analyzed in a large-scale survey. This study aimed to investigate the conditions for which RSM is prescribed and the utilization of RSM in Taiwan. Methods: 1 million beneficiaries enrolled in the Taiwan National Health Insurance Research Database were sampled to identify patients who were prescribed RSM. Next, the diagnoses of these patients based on the International Classification of Diseases 9th Revision Clinical Modification code were analyzed. Logistic regression analysis was employed to estimate the odds ratio (OR) for RSM utilization. Results: Patients with disorders of menstruation and abnormal bleeding from the female genital tract due to other causes were the diagnostic group most commonly treated with RSM (9.48%), followed by those with general (9.46%) and cardiovascular symptoms (4.18%). Subjects treated with RSM were mostly aged 35–49 years (30.1%). The most common combination of diseases for which RSM was prescribed (0.17%) included menopausal disorders and general symptoms. Women were more likely to receive RSM than men (OR = 1.75, 95% confidence interval = 1.73–1.78). RSM was frequently combined with Yan-Hu-Suo and Jia-Wei-Xiao-Yao-San for clinical use. Conclusion: To date, this is the first study to identify the most common conditions for which RSM is used in modern Taiwan. The results indicate RSM as a key medicinal herb for the treatment of gynecological diseases, including menstrual disorders, female genital pain, menopausal disorders, etc. The most common combination for which RSM is prescribed is menopausal disorders and general symptoms. Further research is needed to elucidate the optimal dosage, efficacy, and safety of RSM. INTRODUCTION Salvia miltiorrhiza (Danshen) is a deciduous perennial plant and its roots are highly valued in traditional Chinese medicine (TCM) (Zhou et al., 2005). Radix Salvia miltiorrhiza (RSM) is one of the most widely used medicinal herbs in China and is now exported to other countries (Hu et al., 2005). It is ranked as a "super grade" medicine in the first official book of Chinese herbal drugs, Shen Nong Materia Medica. RSM is historically known to have beneficial effects on the circulatory system and has been listed in the official Chinese Pharmacopoeia for the treatment of menstrual disorders and blood circulation diseases as well as prevention of inflammation (Matkowski et al., 2008). Chinese herbal products (CHPs), administered as complementary therapies, have gained widespread popularity in Taiwan. Danshen CHP is indicated for eliminating blood stasis to enhance flow, promoting blood circulation, and regulating menstruation at a daily dose of 1.2-3.6 g in adults. (https:// service.mohw.gov.tw/DOCMAP/CusSite/TCMLResultDetail.aspx? LICEWORDID 01&LICENUM 007924#). Danshen was the most commonly used single CHP for ischemic stroke (Hung et al., 2015). However, only a few large-scale pharmacoepidemiological studies have investigated the clinical utilization of RSM. No nationwide population-based surveys have previously been conducted to examine the characteristics of RSM use. The National Health Insurance (NHI) has provided a universal health insurance program in Taiwan since 1995; this covers both Western medicine and TCM. 
Almost 98% of all the inhabitants of Taiwan were covered by the NHI program at the end of 2002. Therefore, a nationwide population-based study was conducted by analyzing a cohort of one million sampled patients from the NHI Research Database (NHIRD) in Taiwan from 2000 to 2011. The purpose of this study was to investigate the frequency and characteristics of RSM prescriptions to identify the conditions for which this CHP is prescribed. The results of this study provide valuable information for further pharmacological studies and clinical trials. Data Sources There are approximately 25.68 million individuals registered in the NHI program in Taiwan (Li and Huang, 2015). This study used data from the Longitudinal Health Insurance Database 2000, a dataset of the NHIRD, which includes all claims data. The Longitudinal Health Insurance Database 2000 included 1 million randomly selected individuals from the 2000 Registry of Beneficiaries within the NHIRD. The data related to patient identification were encrypted to protect the privacy of all subjects. All outpatient medical information, such as demographic details (gender, date of birth, income status, and urbanization of living area), primary and secondary diagnoses as per the International Classification of Diseases 9th Revision Clinical Modification (ICD-9-CM), procedures, prescriptions, and medical expenditures from 1996 to 2011, is recorded in the NHIRD (Shih et al., 2014). This study was exempted from review by the Internal Review Board of China Medical University and Hospital (CMUH104-REC2-115). Study Design In Taiwan, TCM doctors are asked to diagnose a condition based on the ICD-9-CM code (Chien et al., 2013). In this study, ICD-9-CM codes for all patients prescribed RSM were collected. Initially, all 125,566 individuals who received RSM between 1996 and 2011 were selected. Then, all those who received RSM before 2000 were excluded from the study because their diagnoses were based on A-codes. The case group finally included 104,512 RSM prescriptions, recorded for people who used RSM for the first time after 2000. A control group of 651,214 subjects was selected from those who visited TCM clinics but had never used RSM, by randomly selecting subjects with the same TCM clinic visit dates as those in the case group, i.e., between 2000 and 2011 (Figure 1). Statistical Analysis The distribution and comparison of the demographic characteristics of the case and control groups are presented in this study. The odds ratio (OR) and 95% confidence interval (CI) for RSM use and RSM-associated risk factors were calculated using multivariable logistic regression, after adjusting for age, gender, urbanization level, occupation, and monthly income (a minimal illustration of this type of model is sketched below). Urbanization was grouped into four levels, with those living in the most urban areas categorized as level 1 and those living in the least urban areas categorized as level 4, as reported by Liu et al. (2006). Occupation was classified as army/education/public sector, farming, fishing, industry, business, or other. Monthly income was grouped into three bands corresponding to a minimum monthly wage of ≤15,840, 15,841-21,900, and >21,900 New Taiwan Dollars (NTD). The corresponding prescription files were also analyzed, and an association rule was applied to evaluate the co-prescription of RSM and other CHPs. The core patterns of disease in RSM users were analyzed with NodeXL (http://nodexl.codeplex.com/), an open-source freeware tool, for network analysis: the top-ranked disease and co-disease pair was drawn as the widest line, the top 2-5 pairs as wide lines, and other combinations as thick lines.
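As a minimal illustration of the adjusted odds-ratio calculation described above, the sketch below fits a multivariable logistic regression and exponentiates the coefficients and confidence limits. The variable names and input file are hypothetical, and the sketch uses Python's statsmodels for illustration; the study itself used SAS 9.4.

```python
# Illustrative sketch only; variable names and the input file are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("rsm_cohort.csv")  # one row per subject, rsm_user coded 0/1

model = smf.logit(
    "rsm_user ~ C(age_group) + C(sex) + C(urbanization) + C(occupation) + C(income_band)",
    data=df,
).fit()

conf = model.conf_int()  # columns 0 and 1 hold the 95% CI bounds on the log-odds scale
odds_ratios = pd.DataFrame({
    "OR": np.exp(model.params),
    "CI 2.5%": np.exp(conf[0]),
    "CI 97.5%": np.exp(conf[1]),
})
print(odds_ratios)
```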
Statistical significance was set at p < 0.05 for all analyses, and all p-values were two-tailed. SAS version 9.4 (SAS, Cary, NC, United States) was used for statistical analysis. RESULTS The most common age groups treated with RSM were as follows: patients aged 35-49 years (N = 31,488; 30.1%), patients aged 20-34 years (29.6%), and patients aged 50-64 years (18.9%). Adults were over 1.9-fold more likely to use RSM than subjects aged <20 years. In addition, women were prescribed RSM more frequently than men (women:men = 1.88:1), with an OR of 1.75 (95% CI = 1.73-1.78). The majority of RSM users lived in highly urbanized areas of Taiwan (N = 34,163; 32.7%). Most of the RSM users belonged to the business sector (N = 48,880; 46.8%). Compared with farmers, subjects working in the army/education/public sectors were significantly more likely to be prescribed RSM (OR = 1.27; 95% CI = 1.22-1.31), followed by those who worked in business (OR = 1.14; 95% CI = 1.11-1.17), industry (OR = 1.13; 95% CI = 1.09-1.16), and other sectors (OR = 1.13; 95% CI = 1.09-1.17). The most common monthly income band among subjects who were prescribed RSM was ≤15,840 NTD (N = 41,300; 39.5%). Compared to subjects with this monthly income, those with a higher monthly income showed an increase in the OR of RSM usage, from 1.04 in those earning 15,841-21,900 NTD to 1.13 in those earning >21,900 NTD. The median daily dose of RSM was 1.5 g, and the most common frequency of administration was three times a day (84.4%). Of the 385,656 TCM visits in Taiwan, the top 10 diseases treated using RSM from 2000 to 2011 are presented in Table 2. The most common diagnosis for RSM users was "Disorders of menstruation and other abnormal bleeding from the female genital tract (ICD-9-CM: 626)" (N = 36,566; 9.48%), followed by "General symptoms (ICD-9-CM: 780)" (N = 36,497; 9.46%) and "Symptoms involving the cardiovascular system (ICD-9-CM: 785)" (N = 16,117; 4.18%). In disorders of menstruation and other abnormal bleeding from the genital tract of women, the formula and single CHPs most commonly prescribed with RSM were Jia-Wei-Xiao-Yao-San (JWXYS) (27.1%) and Yi-Mu-Cao (34.5%), respectively. In patients with general symptoms, the most commonly prescribed formula and single CHPs with RSM were JWXYS (18.7%) and Ye-Jiao-Teng (14.1%), respectively. In patients with symptoms involving the cardiovascular system, the most commonly prescribed formula and single CHPs with RSM were Zhi-Gan-Cao-Tang (43.2%) and Yu-Jin (15.2%), respectively. DISCUSSION This nationwide population-based study was designed to investigate the conditions for which RSM is commonly prescribed by licensed TCM doctors. The present study showed that RSM was most frequently prescribed for patients with disorders of menstruation and other abnormal bleeding from the female genital tract (ICD-9-CM: 626) in Taiwan. This may be because TCM doctors considered the function of RSM to be similar to that of Si-Wu-Tang. In ancient times, it was believed that the function of RSM was similar to that of Si-Wu-Tang, which has been used as a classical formula to treat menstruation disorders. From this perspective, it is easy to understand why RSM has been widely used in the treatment of gynecological diseases (Zheng et al., 2015). RSM was traditionally used to remove stasis and relieve pain, activate blood to promote menstruation, clear heart fire, and cause tranquilization (Yuan et al., 2015).
A previous study has revealed that RSM is the most frequently prescribed single CHP for menopausal syndrome (Chen et al., 2011). The results of this study showed that "Menopausal and postmenopausal disorders" and "General symptoms" was the most common combination of two diseases for which RSM is prescribed ( Table 3). Tanshinone IIA (one of the main constituents of RSM) exerts several beneficial effects for the treatment of postmenopausal symptoms, including cardiovascular protection, prevention of bone loss, prevention of skeletal muscle loss, and anti-carcinogenicity; these involve the binding of tanshinone IIA to estrogen receptors (Zhao et al., 2015). RSM also exerts estrogenic effects by stimulating the biosynthesis of estrogen in circulation, increasing the expression of estrogen receptors in target tissues, and activating estrogen receptor-estrogen response elementdependent pathways (Xu et al., 2016). An ethanol extract of RSM has been reported to suppress trabecular bone loss by inhibiting bone resorption and osteoclast differentiation in menopausal mouse models; therefore, it is thought to be a potential agent for the treatment of osteoporosis (Lee et al., 2020). The main water-soluble compounds in RSM, salvianic acid A and salvianolic acid B, may play a role in the RSMmediated treatment of infertility by ameliorating oxidative stressinduced damage in H 2 O 2 -exposed human granulosa cells by inhibiting the overexpression of cleaved caspase-3, cleaved caspase-9, and tumor necrosis factor-α (Liang et al., 2021). Real-world data from the Taiwan NHIRD revealed that RSM exerted protective effects on patients with breast cancer. Additionally, dihydroisotanshinone I, a chemical constituent of RSM, has been reported to suppress the proliferation of breast cancer cells through apoptosis and ferroptosis (Lin et al., 2019). Therefore, RSM is a key medicinal herb for gynecological diseases. The top three gynecological conditions that are treated with radix Salvia miltiorrhiza include menstrual disorders and abnormal bleeding from the female genital tract (66.6%); pain and other symptoms associated with female genital organs (15.6%); and menopausal and postmenopausal symptoms (7.82%). ( Table 6). The second most frequent diagnosis in patients prescribed RSM in Taiwan was "General symptoms (ICD-9-CM: 780)." The most common conditions in RSM users with "General symptoms" was "Sleep disturbances (ICD-9-CM: 780.5)" (N 25,249; 69.18%), followed by "Dizziness and giddiness (ICD-9-CM: 780.4)" (N 7,675; 21.03%). Lee et al. reported that 10 diterpenoids isolated from RSM displaced the binding of [ 3 H] flunitrazepam with gamma-aminobutyric acid-benzodiazepine receptors. Among these compounds, miltirone had the highest binding activity (IC 50 0.3 µM) and was orally active in animal models as a tranquillizer (Lee et al., 1991). Fang et al. reported that administration of an ether extract (600 mg/kg) of RSM significantly decreased sleep latency and increased sleep duration in mice treated with pentobarbital (Fang et al., 2010). Tanshinone IIA showed neuroprotective activity against cerebral ischemia via the inhibition of macrophage migration inhibitory factor (Chen et al., 2012). The third most frequent diagnosis in patients prescribed RSM in Taiwan was "Symptoms involving the cardiovascular system (ICD-9-CM: 785)." It was in the 1930s that modern chemical and medical methods were first used for studying the active constituents of RSM and its pharmacological actions. 
RSM exerts its effects on the cardiovascular system and is used mainly to treat coronary artery disease. TanshinoneⅡA is the major compound that yields the most notable results in coronary artery disease treatment (Zheng et al., 2015). According to modern pharmacological studies, RSM and its main components exert protective effects on the cardiovascular and cerebrovascular systems (Yuan et al., 2015). RSM was used to treat "Essential hypertension (ICD-9-CM: 401)" and "Disorders of lipid metabolism (ICD-9-CM: 272; Table 3)." It is the most frequently prescribed single herb for hypertension. Multiple pharmacological effects of RSM on the cardiovascular system have been reported, including anti-hypertensive effects (Kang et al., 2002). RSM is the most commonly prescribed single CHP for atrial fibrillation treatment in Taiwan. Patients with atrial fibrillation using TCM have a reduced risk of new-onset ischemic stroke (Hung et al., 2016a). RSM exerts anti-atherosclerotic, anticardiac hypertrophic, anti-oxidant, and anti-arrhythmic effects by promoting blood circulation, and it provides relief from blood stasis . It improves microcirculation, causes coronary vasodilatation, suppresses the formation of thromboxane, inhibits platelet adhesion and aggregation, and protects against myocardial ischemia (Cheng, 2005). RSM protects endothelial cells, exerts anti-inflammatory effects, reduces lipid peroxidation, and prevents calcium overload. RSM has been frequently used to treat hyperlipidemia, chronic hepatitis, hepatic fibrosis, chronic renal failure, and gynecological conditions, including dysmenorrhea, amenorrhea, and lochioschesis, without any serious adverse effects (Peng et al., 2001;Chen et al., 2013). This explains why RSM is commonly prescribed by TCM doctors for the treatment of "Symptoms involving the cardiovascular system." RSM was also prescribed to patients in Taiwan with "Chronic liver disease and cirrhosis (ICD-9-CM: 571)" and "Functional digestive disorders, not elsewhere classified (ICD-9-CM: 564; Table 2)." Recent studies have shown that RSM and its main constituents demonstrate protective effects in models of liver injury induced by carbon tetrachloride, D-galactosamine, acetaminophen, and alcohol administration. Several active ingredients that are effective in protecting liver microsomes, hepatocytes, and erythrocytes against oxidative damage have Frontiers in Pharmacology | www.frontiersin.org July 2021 | Volume 12 | Article 719519 been identified (Peng et al., 2001). Some animal studies have shown that RSM exerts protective effects on the intestinal mucosa of rats with severe acute pancreatitis and obstructive jaundice, perhaps by inhibiting apoptosis and downregulating the expression of nuclear factor-κB at the protein level (Kim et al., 2005;Zhang et al., 2010). Previous studies have shown that RSM can exert protective effects on the intestinal mucosa in animal models of acute pancreatitis (Kim et al., 2005) and obstructive jaundice by reducing the translocation of intestinal bacteria in patients (Chen et al., 2013). This study showed that RSM was prescribed for patients with "Secondary malignant neoplasm of other specified sites (ICD-9-CM: 198)," "Malignant neoplasm of trachea, bronchus, and lungs (ICD-9-CM: 162)," and "Malignant neoplasm of other and illdefined sites (ICD-9-CM: 195; Table 4). 
Tanshinone IIA is a derivative of phenanthrene-quinone that shows cytotoxic activity against many human carcinoma cell lines, induces differentiation and apoptosis and inhibits invasion and metastasis of cancer cells. It is thought to function by inhibiting DNA synthesis and proliferation in cancer cells, regulating the expression of genes associated with proliferation, differentiation, and apoptosis, inhibiting the telomerase activity of cancer cells, and altering the expression of cell surface antigens (Yuan et al., 2003;Shan et al., 2009). The specific components responsible for the antitumor activity of RSM may be a group of diterpenoids with furano-1,2-or furano-1,4-naphthoquinone skeletons (tanshinones); however, the mechanism of action of these compounds is yet to be elucidated. In addition, salvinal, isolated from RSM, has been shown to inhibit proliferation and induce apoptosis of various human cancer cells (Peng et al., 2001). Therefore, salvinal may be useful for the treatment of human cancers, particularly in patients with drug resistance (Chang et al., 2004). Salvinal exhibits no crossresistance with current microtubule inhibitors, including vinca alkaloids and taxanes, in cells overexpressing P-glycoprotein or multidrug resistance-related proteins (Chang et al., 2004). Moreover, the anti-tumor effects of tanshinone IIA include enhancing the apoptosis of advanced cervix carcinoma CaSki cells (Shan et al., 2009), inhibiting the invasion and metastasis of human colon carcinoma cells (Pan et al., 2013), suppressing angiogenesis in human colorectal cancer (Zhou et al., 2012), downregulating the expression of epidermal growth factor receptors in hepatocellular carcinoma cells (Zhai et al., 2009), and reducing Stat3 expression in breast cancer stem cells (Lin et al., 2013;Chen, 2014). The aqueous extracts of RSM have long been used in TCM for the treatment of cancer. Cryptotanshinone has been reported to be a potential anticancer agent (Peng et al., 2001). RSM may inhibit cancer cell proliferation through its antioxidant activity against tumor initiation and induce apoptosis or autophagy through reactive oxygen species generation, which inhibits tumor progression, development, and metastasis (Hung et al., 2016b). In the present study, RSM was prescribed for patients with "Dementia (ICD-9-CM: 290)," "Disorders of lipid metabolism (ICD-9-CM: 272)," and "Essential hypertension (ICD-9-CM: 401; Table 4)." The three combined diagnosis groups indicated that dementia had some relationship with circulation and metabolic diseases. Some animal studies have strongly indicated that compound Danshen tablet could help ameliorate learning and memory deficits in mice by rescuing the imbalance between the levels of cytokines and neurotrophins (Teng et al., 2014). In addition to RSM, compound Danshen tablet contains Panax notoginseng and borneol. Several active ingredients of compound Danshen tablet have been shown to exert therapeutic effects in animal models of Alzheimer's disease (Yin et al., 2008;Lee et al., 2013). Furthermore, clinical trials have indicated that RSM is an effective agent for the prevention and treatment of Alzheimer's disease (Chang et al., 2004). The most common diagnosis for RSM users was "Disorders of menstruation and other abnormal bleeding from the female genital tract", followed by "General symptoms" and "Symptoms involving the cardiovascular system". (Table 2). 
These conditions are associated with stress and lifestyle, explaining why most of the RSM users are in the business profession and live in higher urbanized areas. The reason why lower income is associated with RSM use is unclear. Nevertheless, we have presented the data here and hope that sparks discussion. A previous study, which enrolled 2,380 participants from the Stanford Five-City Project in the United States, examined the independent contribution of education, income, and occupation to a set of cardiovascular disease risk factors, such as cigarette smoking, high blood pressure, and high cholesterol (Winkleby et al., 1992). Education was the only factor that was significantly associated with the cardiovascular risk factors; higher education results in better socioeconomic status and thus predicts good health. Although lower income may not directly correlate with lower education status, it may still contribute to the results. The Prospective Urban Rural Epidemiology (PURE) study demonstrates that there are a greater number of cases of cardiovascular diseases and stroke in urban areas than in rural areas (Teo et al., 2013). The urban areas have more business sectors than the rural areas. Farmers living in rural areas exhibit lower cases of cardiovascular diseases than other occupations living in urban areas and thereby had less need to take Danshen in this study. We also note that those with a higher monthly income are more capable of affording medical expenses and visiting the TCM outpatient clinic for Danshen medication. This study has several limitations. First, a definitive conclusion could not be made about the effectiveness of RSM. The data were collected retrospectively from databases and the choice of herbal medicine was at the discretion of Chinese Medicine practitioners who were trained how to apply RSM in clinical practice. The prescription of RSM is largely dependent on the subjective judgment of TCM doctors. Their educational background, years of experience, and site of practice were not available from the NHIRD. Second, although TCM physicians in Taiwan use ICD-9-CM for diagnosis in clinical practice, no reliable and suitable disease-coding system exists for TCM (Yang et al., 2015b). Consistent with the therapeutic principles of RSM, several variations were observed in the prescription characteristics used in this study. Therefore, the development of a TCM diagnostic coding system in the future will considerably improve TCM research. Third, the data file used in this study was provided by Taiwan NHRI, which had been authorized by the Ministry of Health and Welfare to manage the claims data of the NHI. The latest, updated version of the database by the NHRI is not currently available. Finally, the NHI only provided reimbursement for finished herbal products prescribed by TCM physicians. This did not include decoctions and other herbal preparations provided by pharmacies, and this may have resulted in the underestimation of diagnoses and frequency of RSM utilization. However, this underestimation is likely to be small because most Chinese herbal medicines are reimbursed (Yang et al., 2015a). CONCLUSION This is the first large-scale investigation of RSM usage in patients with different conditions. Using claims data from the NHIRD, a nationwide, population-based, cross-sectional, descriptive study was conducted to investigate the conditions and characteristics of RSM use. 
The largest group of patients prescribed RSM had menstrual disorders, followed by general symptoms, and cardiovascular symptoms. The most common combination of diseases for which RSM was prescribed included Menopausal disorders and General symptoms. These results indicate that RSM is a key medicinal herb for the treatment of gynecological disorders, including menstrual disorders, female genital pain, and menopausal disorders. Women aged 35-49 years, living in the urban areas were the main RSM users. For clinical purposes, RSM was most frequently combined with Yan-Hu-Suo and JWXYS. Further research is needed to strengthen the available clinical evidence regarding the efficacy and safety of RSM, either alone or in combination with other CHPs, in these conditions. DATA AVAILABILITY STATEMENT The dataset used in this study is held by the Taiwan Ministry of Health and Welfare (MOHW). The Ministry of Health and Welfare must approve our application to access this data. Any researcher interested in accessing this dataset can submit an application form to the Ministry of Health and Welfare requesting access. Please contact the staff of MOHW (Email: stcarolwu@mohw.gov.tw) for further assistance. Taiwan Ministry of Health and Welfare Address: No.488, Sec. 6, Zhongxiao E. Rd., Nangang Dist., Taipei City 115, Taiwan (R.O.C.). Phone: +886-2-8590-6848. Project (BM10701010021), MOST Clinical Trial Consortium for Stroke (MOST 109-2321-B-039-002), Tseng-Lien Lin Foundation, Taichung, Taiwan, and Katsuzo and Kiyo Aoshima Memorial Funds, Japan. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
v3-fos-license
2018-04-03T05:18:37.217Z
2014-07-08T00:00:00.000
5248419
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://downloads.hindawi.com/journals/emi/2014/978795.pdf", "pdf_hash": "ea61909e3c6f7276bc5b834066eedf29fe2c212a", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41674", "s2fieldsofstudy": [ "Medicine" ], "sha1": "cca0810da347984699b01b443d44c7545095fcd0", "year": 2014 }
pes2o/s2orc
Emergency Sonography Aids Diagnostic Accuracy of Torso Injuries: A Study in a Resource Limited Setting Introduction. Clinical evaluation of patients with torso trauma is often a diagnostic challenge. Extended focused assessment with sonography for trauma (EFAST) is an emergency ultrasound scan that adds to the evaluation of intrathoracic abdominal and pericardial cavities done in FAST (focused assessment with sonography for trauma). Objective. This study compares EFAST (the index test) with the routine standard of care (SoC) investigations (the standard reference test) for torso trauma injuries. Methods. A cross-sectional descriptive study was conducted over a 3-month period. Eligible patients underwent EFAST scanning and the SoC assessment. The diagnostic accuracy of EFAST was calculated using sensitivity and specificity scores. Results. We recruited 197 patients; the M : F ratio was 5 : 1, with mean age of 27 years (SD 11). The sensitivity of EFAST was 100%, the specificity was 97%, the PPV was 87%, and the NPV was 100%. It took 5 minutes on average to complete an EFAST scan. 168 (85%) patients were EFAST-scanned. Most patients (82) (48%) were discharged on the same day of hospitalization, while 7 (4%) were still at the hospital after two weeks. The mortality rate was 18 (9%). Conclusion. EFAST is a reliable method of diagnosing torso injuries in a resource limited context. Introduction Evaluation of patients with torso trauma is often a diagnostic challenge for emergency physicians and trauma surgeons. Uncontrolled hemorrhage is responsible for over 50% of trauma related deaths [1][2][3]. Significant bleeding into the peritoneal, pleural, or pericardial spaces may occur without obvious signs [4,5]. Physical findings may be unreliable because of decreased patient consciousness, neurologic deficit, medication, or other associated injuries like fractures of lower chest ribs, contusion, and abrasions of the abdominal wall. All these call for a need to confirm internal injury by imaging as uncontrolled haemorrhage because torso trauma is one of the major causes of early trauma deaths [6]. However, in many rich trauma centers, bedside ultrasound is the initial imaging modality used to evaluate patients with blunt and penetrating torso trauma. This cannot be said for many countries in sub-Saharan Africa. Yet bedside ultrasound is cheap, noninvasive, and fast leading to early intervention and hence potential reduction in mortality [5,7]. The purpose of this study therefore was to determine the diagnostic accuracy of EFAST when compared with the routine standard of care (SoC) assessment for torso trauma injuries in a resource limited setting. Design. A cross-sectional analytical study was conducted. Study Setting. The study was carried out at the A&E unit of Mulago National Referral and Teaching Hospital in Kampala city. The unit is fully fledged with medical and surgical wings, two operating rooms, with X-ray and ultrasound facilities, a high dependence unit (with three beds), and a thirty-bed holding emergency ward. Adjacent to it are blood bank, hematology, microbiology, and clinical chemistry laboratories. The unit sees on average 30 patients with internal torso per month. These included patients with torso abrasions and/or bruises, unconsciousness, multiple injuries, alcohol intoxication with trauma, long bone fractures, pelvic fractures, and spine injuries. Those with penetrating injuries and burns with no other trauma injuries were excluded. Sampling. 
Patients were recruited consecutively on arrival at the A&E unit; they were triaged and transferred to the appropriate examination rooms for further assessment according to the ATLS protocol. Patients in need of operative management were immediately taken to the operating room while those assigned to the nonoperative management plan were admitted to the 24 h holding emergency ward for observation. Patients consented after they were resuscitated; a special request was obtained from the IRB for a waiver of consent for the unconscious and those without relatives. History and examination findings were recorded on precoded questionnaires. Patients suspected to have torso injuries underwent EFAST using SonoSite TITAN portable ultrasound machine with a transducer frequency ranging from 3.5 to 5 MHz. Images were saved to be reread by a consultant radiologist as a quality control measure. Patients then underwent secondary survey followed by other routine investigations and management according to the hospital's SoC; this consisted of a CXR and abdominal ultrasound scanning preceded by a physical examination and history taking. The CXR was taken by a radiographer and read by an experienced radiologist. The abdominal ultrasound scans were taken by experienced operators. The history taking and physical examinations were performed by an intern doctor and validated by residents and a consultant surgeon in succession. Examination of the Right Upper Quadrant. The transducer was placed in the midaxillary line between the 11th and 12th ribs, applying coronal scan with the probe (cranially or caudally and medially or laterally) to obtain an optimal image of Morison's pouch and looking for free fluid in it. Findings were recorded on coded questionnaires. Assessing for Fluid in Right Pleural Cavity. The probe was moved slightly upwards from position of Morison's pouch to look for fluid in the right pleural cavity. Findings were recorded on a coded questionnaire. Examination of the Left Upper Quadrant. The probe was placed along the left posterior axillary line between the 8th and 11th ribs. If rib shadows were seen then the probe was placed between the ribs (along the intercostal space) to avoid poor acoustic window. Findings were recorded on coded questionnaires. Assessing for Fluid in Left Pleural Cavity. The probe from splenorenal view was angled with beam direction more cephalad for well visualisation of the spleen and diaphragm and the fluid above the diaphragm was observed. Findings were recorded on coded questionnaires. Assessing for Free Fluid in Pelvis. The probe was placed in transverse position 2 cm above pubis (for transverse imaging of the bladder) and then turned longitudinally (for longitudinal imaging of the bladder). Findings were recorded on coded questionnaires. 2.3.6. Assessing for Pneumothorax. The probe was placed perpendicular to the ribs in the anterior chest region intercostal spaces 2-3 along the midclavicular line. This was usually done at 3rd-4th intercostal spaces. When visualization was inadequate, the probe was rotated on 90 degrees, placing it directly in the intercostal space along the ribs. Absence of "lung sliding" was a sign of pneumothorax. Findings were recorded on coded questionnaires. Assessing for Fluid in Pericardium. Subxiphoid pericardial window which has been considered the gold standard for the diagnosis of pericardial effusion was used. In the positive examination there was anechoic space (collection of fluid) between the heart and the pericardium. 
Findings were recorded on a coded questionnaire. Study Variables. These included age, sex, tribe, and occupation. In addition, signs indicating torso trauma were abrasions and bruises. Unconsciousness, multiple injuries, alcohol intoxication, long bone fractures, pelvic fractures, and spine injuries were the other variables. Data Collection, Management, and Analysis. Data collected using pretested questionnaires were double entered, coded, and cleaned using the EpiData version 5.3.2 software package. Stored data were exported to STATA version 12 for analysis. Categorical and numerical variables were summarized using proportions, frequency tables, pie charts, and bar charts. Continuous data were summarized into means, medians, and standard deviations. The usefulness of EFAST was determined by calculating the sensitivity and specificity of EFAST diagnoses after comparison with the SoC diagnoses made. Ethical Considerations. Written informed consent was obtained from the participants, and permission was obtained from the IRB for the patients unable to consent because of their unconscious state and having no available next-of-kin. Results A total of 197 patients were clinically suspected to have torso injury. These patients were subjected to EFAST and the SoC. EFAST scanning took on average 5 minutes to complete for each patient. The scanning was done during resuscitation just after the primary survey. The study was done from January to March 2012 (see Figure 1). The Baseline Characteristics. The male : female ratio was 5 : 1. The age ranged from 2 to 83 years with a mean age of 26.9 years (SD 11). There were 29 (14%) victims whose age was less than 18 years, 158 (80%) were aged between 19 and 45 years, and 10 were >45 years old. 78 (40%) patients arrived at the unit within an hour of the time of injury. 168 (85%) patients were EFAST-scanned within an hour of admission, while most patients (105; 54%) underwent SoC investigations after more than an hour of stay at the hospital. Most patients (82; 48%) were discharged on the same day of hospitalization. By the end of the first week 151 (77%) patients had been discharged, 13 (7%) patients were discharged in the second week, and 7 (4%) were still at the hospital by the end of two weeks. 18 (9%) of the participants had died (see Table 1). Within one hour, the emergency room doctor could establish a diagnosis of internal injury in only 17% of patients. Road traffic crashes (RTCs) were the main reason for hospitalization. Circumstances of Injury. Most of the injuries were due to motor traffic crashes (127; 65%). Assault was the second largest cause of injuries, affecting 61 patients (31%), followed by falls affecting 6 individuals (3%), while other causes contributed to 3 injuries (2%). Of the motor traffic crash victims, the largest group comprised 53 patients (27%), followed by motorbike victims (41; 21%); taxis and private cars accounted for the remaining victims. Of the motorbike injuries, 31 (62%) were the motorbike riders while 19 (38%) were passengers. External Injuries. There were 23 (12%) participants who did not present with any external injuries. In most instances, participants had more than one presentation of external injuries. All presentations were recognized at the time of admission; the sites of external injuries are shown in Table 2. The chest was the most affected region (130; 66%), followed by the head. Management Strategies. Overall, 14 (7%) patients were managed operatively while 183 (93%) were managed nonoperatively.
There were 4 (2%) patients with hemoperitoneum and 8 (4%) with hemothorax. One patient had a grade four splenic injury and one other a liver laceration in combination with gut perforation. One patient who failed nonoperative management with a hemoperitoneum died en route to the operating room and the eight patients with hemothoraces had tube thoracostomies done. Discussion We set out to determine the utility of emergency ultrasound scanning for the torso trauma patients in a resource limited context. We found that EFAST had high sensitivity and specificity in a low-resourced, high patient turnover environment. There were more males than females injured, similar to a previous study at the unit [8]. The mean age was 27 years attesting to the fact that the youths are the most vulnerable [8,9]. A fair number of patients (78) (40%) arrived at the A&E within the first hour of injury. This applies more to patients whose district of origin was within 25 km of radius of the hospital. Most of the patients who came from more peripheral districts arrived at the hospital by the second day. Most of the patients underwent EFAST within 30 minutes of admission; 159 (81%) had undergone EFAST within one hour. Most of the SoC diagnoses did not differ from those reached by bedside ultrasound (EFAST). Several studies support bedside ultrasound as it is fast, affordable, and noninvasive [10][11][12]. The concept of the golden hour in trauma stresses the fact that most trauma patients can be saved if attended immediately after injury. This can be helped if an ultrasound machine is stationed within the resuscitation room and performed immediately when internal injury is suspected during initial assessment [5,7]. EFAST improves care by providing a quicker method of assessment, less reliance of radiology staff (who we have few of), and by its ready availability within the resuscitation room. EFAST and Standard of Care 4.1.1. Hemoperitoneum. In patients who underwent both EFAST and SoC which included full abdominal ultrasound scanning, 23 (12%) were positive at EFAST and 21 were positive at full abdominal ultrasound scanning. The two patients who were considered positive at EFAST but negative when subjected to SoC investigations were among the nonoperatively managed group; therefore EFAST findings could not be validated. Two patients who were considered positive when subjected to SoC investigations underwent surgery and a specific diagnosis of solid organ injury was made. In these two patients the radiological diagnosis was ruptured spleen; however, a ruptured liver and urinary bladder were missed in the first patient and a ruptured spleen was missed in the second patient. Several studies point out that ultrasound is very sensitive in fluid detection but poorly sensitive in solid organ injuries detection [5]. In one patient both EFAST and abdominal ultrasound scan diagnoses were normal but on the second day the patient was still complaining of severe abdominal pain and the abdomen was distended. An exploratory laparotomy was done and the patient had a rupture small gut. Ultrasound is shown by several studies to be sensitive for detection of abdominal fluid accumulation, which is assumed to be blood in acute trauma but has little role in diagnosis of gut perforation in which it can describe fluid level in nonacute perforation [5,6]. In two other patients, ruptured small gut was missed by both EFAST and full abdominal ultrasound scans only to be diagnosed intraoperatively on failure of conservative management. 
Hemothorax. Of the patients who underwent both EFAST and the standard of care (in this case a chest X-ray) for thoracic complaints, 10 (5%) were positive on EFAST and 9 (5%) on chest X-ray. All nine patients were managed according to the standard of care. The patient who was positive on EFAST alone was managed nonoperatively because the hemothorax was small. Ultrasound is more sensitive in the detection of hemothorax than a chest X-ray [5,13]. Ultrasound can detect less than 20 mL of fluid, while an ordinary chest X-ray can detect a hemothorax of 150 mL only when the patient's position is maneuvered while the image is taken [5]. Pneumothorax. Five (3%) and four (2%) patients were positive for pneumothorax on EFAST and SoC, respectively (in the SoC a chest X-ray was used). All these patients were managed operatively. Thoracic ultrasound is shown by several studies to be highly sensitive in detecting hemopneumothorax; its sensitivity has been demonstrated to be up to 100%, with a specificity of 99.7% [5]. This is reflected in this study, in which one patient's diagnosis was missed on the initial chest X-ray (standard of care). Hemopericardium. There was one patient who had a hemopericardium detected as a small collection on ultrasound but missed on chest X-ray. This may not have been blood but could have been an effusion. The patient was managed conservatively. As shown by many previous studies, ultrasound is very sensitive in fluid detection [10]. EFAST sensitivity and specificity were 100% and 97%, respectively. Positive and negative predictive values were 87% and 100%, respectively. The findings are similar to those of several other studies [5][14][15][16]. Patients' Disposition at Two Weeks. Patients were followed up for two weeks. For patients who had been discharged, a phone call was made at the end of the second week to ask how they were doing and whether they had returned to any hospital because of worsening of the discharge diagnosis; in this way it was possible to establish whether they were alive. The mortality rate was 9%, which was relatively high; in a study by Gyayi in Uganda the mortality was 7%, and other similar studies report 4% to 5% [17][18][19][20]. Study Limitations. No postmortems were done for the fatalities, and for the patients managed nonoperatively there were no intraoperative findings to validate the EFAST findings. The low-frequency transducer that was used could have missed some positive findings. Emergency ultrasound scanning may be operator dependent, though some studies show that FAST can be reliably performed by radiologists and nonradiologists [21]. Conclusion EFAST, an emergency ultrasound scanning technique, is a highly sensitive and specific assessment modality for torso injury. It should be adopted for routine use in low resource contexts.
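For reference, the diagnostic-accuracy figures quoted in this paper follow directly from a two-by-two comparison of EFAST against the reference standard. The sketch below shows the calculation with illustrative counts chosen only to approximate the reported values; it is not the study's actual cross-tabulation.

```python
def diagnostic_accuracy(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV of an index test (here EFAST)
    from true/false positive and negative counts against the reference standard."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Illustrative counts only (chosen to roughly match the reported 100%/97%/87%/100%):
print(diagnostic_accuracy(tp=33, fp=5, fn=0, tn=159))
```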
Assessment of heart-substructures auto-contouring accuracy for application in heart-sparing radiotherapy for lung cancer

Abstract

Objectives: We validated an auto-contouring algorithm for heart substructures in lung cancer patients, aiming to establish its accuracy and reliability for radiotherapy (RT) planning. We focus on contouring an amalgamated set of subregions in the base of the heart considered to be a new organ at risk, the cardiac avoidance area (CAA), to enable maximum dose limit implementation in lung RT planning.

Methods: The study validates a deep-learning model specifically adapted for auto-contouring the CAA (which includes the right atrium, aortic valve root, and proximal segments of the left and right coronary arteries). Geometric, dosimetric, quantitative, and qualitative validation measures are reported. Comparison with manual contours, including assessment of interobserver variability, and robustness testing over 198 cases are also conducted.

Results: Geometric validation shows that auto-contouring performance lies within the expected range of manual observer variability despite being slightly poorer than the average of manual observers (mean surface distance for CAA of 1.6 vs 1.2 mm, dice similarity coefficient of 0.86 vs 0.88). Dosimetric validation demonstrates consistency between plans optimized using auto-contours and manual contours. Robustness testing confirms acceptable contours in all cases, with 80% rated as "Good" and the remaining 20% as "Useful."

Conclusions: The auto-contouring algorithm for heart substructures in lung cancer patients demonstrates acceptable and comparable performance to human observers.

Advances in knowledge: Accurate and reliable auto-contouring results for the CAA facilitate the implementation of a maximum dose limit to this region in lung RT planning, which has now been introduced in the routine setting at our institution.

Introduction

Radiotherapy (RT) is a key component of curative-intent treatment for lung cancer. Recent studies have shown increasing evidence that excess mortality is related to irradiation of the heart [1,2,4-6]. To monitor and reduce RT dose to the base of heart region, it is first necessary to contour the relevant substructures from the planning CT scan. However, contouring of heart substructures from RT-planning CT scans is challenging and time-consuming, which may limit the ability to reduce dose to this region. Heart substructures are often poorly visualized in RT-planning CT images due to respiration and cardiac motion, complex anatomy, and low contrast between structures. The development of methods to facilitate contouring of heart substructures, particularly the base of heart region, is therefore of clinical significance.

Automatic contouring methods have been investigated to address this issue. Multiple studies have reported auto-contouring systems to identify heart substructures from CT images, using a variety of approaches, including atlas methods based on deformable image registration [7,8], deep-learning methods based on convolutional neural networks (CNNs) [9-14], or hybrid methods [15].
Studies vary in which substructures are included in the model. Most include the whole heart and the 4 main heart chambers (right and left atria and ventricles). Often the great vessels are included (aorta, pulmonary artery, superior vena cava, inferior vena cava). Less frequently included are the coronary arteries (left main coronary artery, left anterior descending artery, left circumflex artery, and right coronary artery). The long, thin nature of the coronary artery structures and their often poor visualization on CT imaging make them particularly challenging to contour automatically. One solution to this problem is to define an expanded "high-risk" zone containing the sensitive coronary arteries while being easier to contour when informed by surrogate landmarks [16].

In this article, we report the validation of a deep-learning-based model for auto-contouring of a high-risk region at the base of the heart (subsequently referred to as the cardiac avoidance area or CAA). This region was identified using image-based data mining techniques by McWilliam et al [4] and encompasses the right atrium, aortic valve root, and proximal sections of the left and right coronary arteries. Rather than including all heart substructures in the auto-contouring model, we only contour those substructures which are to be included in the CAA. This reflects a difference in purpose to many studies where a large number of substructures are contoured individually for research into correlations between substructure dose and clinical outcomes. In the present study, we propose to incorporate a novel maximum dose limit to the CAA into the optimization of lung RT plans delivered at our institution, that is, to introduce the CAA as a new organ at risk (OAR) for lung RT planning. The aim of this validation study is to demonstrate auto-contouring performance suitable for this clinical implementation.

Following recommendations on validation and implementation of automated and artificial intelligence-based tools in RT [17,18], we used a range of validation metrics, including geometric and dosimetric, quantitative and qualitative. We also compared the quality of auto-contours to the expected range of manual contours due to interobserver variation. In contrast to previous studies, we assess this interobserver variation using both geometric and more clinically relevant dosimetric measures. Finally, we reviewed auto-contouring performance over 198 test cases to demonstrate the robustness of the model on a wide range of images.
Cardiac avoidance area

The CAA defined at our institution is based on the findings of McWilliam et al [4], who identified, using image-based data mining techniques, a region at the base of the heart with excess radiosensitivity. The maximum dose to the base of the heart was found to be the most important factor associated with survival. After discussion with cardiologists and incorporating considerations of cardiac physiology [19], the final CAA includes the following structures located at the base of the heart: the right atrium, the proximal portions of the left and right coronary arteries, and the aortic valve root. The aortic valve root structure includes the aortic valve and aortic root superiorly to the level of the coronary ostia or the superior extent of the right atrium. The left and right coronary arteries are contoured with a standard width of 7 mm, and only the first 2 cm from the coronary ostia is included. These short and wide artery structures are intended to include the portions coincident with the previously identified base of heart region and are also more easily contoured than if the full artery were included. Figure 1 shows a 3D render of the constituent substructures which are included in the CAA.

Model validation

An auto-contouring model for the CAA was developed based on a 3D CNN. The development and validation of the model were split into 3 phases (see Figure 2). An initial model development and training phase was followed by a preclinical model commissioning phase. The purpose of the commissioning process was to demonstrate acceptable performance for use of the auto-contouring model as part of a clinical workflow, using a variety of validation methods. Finally, clinical phase validation took place after implementation to verify performance in the clinical setting.

CT image data

Auto-contouring methods were trained and assessed using CT image data from 331 patients treated with RT for lung cancer at a single centre (73 cases used for model training, 258 cases used for subsequent validation). Patient characteristics of cases used for model training and validation are shown in Table 1. Training and validation data were selected retrospectively from images acquired on 4 CT scanners (all Philips Brilliance Big Bore) using a 4D THORAX protocol (3-mm slice width, 512 × 512 matrix, 56- to 70-cm FOV, 120 kVp, 16 mm × 1.5 mm collimation, 0.059 pitch, 0.44-s rotation time, 47 mGy CTDIvol). For each case, only the 4DCT average images were used for model training and validation, and all patients received IV contrast unless clinically contraindicated. Each case consisted of multiple CT slice images covering the lungs of the patient, and the term "image" is subsequently used to refer to the full 3D scan rather than individual slice images.

Auto-contouring with 3D CNN

An existing fast, efficient 3D CNN was adapted for cardiac substructures. The model is based on one previously used to segment abdominal organs [20,21], built on a 3D UNet design [22] with added residual connections [23] and further modifications to reduce the model size and computational cost (see Supplementary Material for further details). The model uses the PyTorch deep-learning framework (v1.8.1). An initial step automatically identifies the heart location in the CT [24] and crops the image around it to a size of 128 × 128 × 64 voxels.
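The architectural details above are only summarized, so as a rough illustration of the kind of network described (a small residual 3D U-Net operating on a two-channel, 128 × 128 × 64 cropped volume), a sketch is given below. The layer widths, normalization choices, network depth, and class count here are our own assumptions for illustration, not the authors' model; the two input channels correspond to the two CT display windows described in the preprocessing section that follows.

```python
import torch
import torch.nn as nn

class ResBlock3D(nn.Module):
    """Two 3D convolutions with a residual (skip) connection."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.InstanceNorm3d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.InstanceNorm3d(out_ch),
        )
        self.skip = nn.Conv3d(in_ch, out_ch, kernel_size=1)  # match channel count
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.conv(x) + self.skip(x))

class TinyResUNet3D(nn.Module):
    """Minimal 3-level residual 3D U-Net: 2 input channels (CT windows),
    n_classes output channels (background plus one per substructure)."""
    def __init__(self, in_ch: int = 2, n_classes: int = 5, base: int = 16):
        super().__init__()
        self.enc1 = ResBlock3D(in_ch, base)
        self.enc2 = ResBlock3D(base, base * 2)
        self.enc3 = ResBlock3D(base * 2, base * 4)
        self.pool = nn.MaxPool3d(2)
        self.up2 = nn.ConvTranspose3d(base * 4, base * 2, kernel_size=2, stride=2)
        self.dec2 = ResBlock3D(base * 4, base * 2)
        self.up1 = nn.ConvTranspose3d(base * 2, base, kernel_size=2, stride=2)
        self.dec1 = ResBlock3D(base * 2, base)
        self.head = nn.Conv3d(base, n_classes, kernel_size=1)

    def forward(self, x):
        e1 = self.enc1(x)              # full resolution
        e2 = self.enc2(self.pool(e1))  # 1/2 resolution
        e3 = self.enc3(self.pool(e2))  # 1/4 resolution (bottleneck)
        d2 = self.dec2(torch.cat([self.up2(e3), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)           # per-voxel class logits

if __name__ == "__main__":
    net = TinyResUNet3D()
    x = torch.randn(1, 2, 64, 128, 128)  # (batch, channels, D, H, W) cropped volume
    print(net(x).shape)                   # torch.Size([1, 5, 64, 128, 128])
```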
Preprocessing is applied to the cropped image to remove high-density metal artefacts and to normalize the images into 2 separate contrast channels. Metal artefacts are identified using a threshold to create a mask of the high-density voxels. The high-density mask is then dilated and smoothed before being used to replace the included voxels with a soft-tissue density value. For image normalization, channel 1 uses a wider window to view general structures in the thoracic region (W1600, L-200), while channel 2 uses a narrower window to increase the contrast of the soft-tissue heart substructures (W200, L65). The segmentation CNN runs on the cropped image to infer masks representing each cardiac substructure, before these are postprocessed to remove small, disconnected regions via a morphological opening operation [25]. Finally, the substructures are combined into the final CAA contour and padded back to the original image matrix size.

The deep-learning model was trained using 73 cases with manual contours for the right atrium, aortic valve root, left coronary artery, and right coronary artery. These cases were assigned as either training (80%) or validation (20%), and the CNN was trained using a weighted multiclass soft Dice loss function [26] and the Adam optimizer [27] (loss function plots are shown in the Supplementary Material). The training process takes approximately 2 h using an NVidia GeForce RTX 3090 GPU. Once trained, the model is compiled using the Open Neural Network Exchange (ONNX) [28], decreasing its size. During inference the CNN produces contours using the CPU alone, facilitating easy deployment of the model without specialist hardware. The final workflow uses the DICOM CT series as input and writes the segmented CAA and substructure contours as a DICOM RT-Structure [29].

Geometric validation

We evaluated the accuracy of our auto-contouring model against 3 sets of manual contours for 10 cases that were not included in the model training data (separate from the 73 cases used during model training). The 3 sets of manual contours were drawn by different observers (2 radiation oncologists and 1 physicist, all trained in contouring of the heart substructures involved).

We created a consensus contour from the 3 manual contours using the simultaneous truth and performance level estimation (STAPLE) method [30], which estimates a probabilistic ground truth and a performance level for each observer based on their agreement with others. We then compared all contours (manual and automatic) to the STAPLE contour using 3 metrics: mean surface distance (MSD), dice similarity coefficient (DSC), and Hausdorff distance 95th percentile (HD95) [31]. MSD measures the average distance between 2 contour surfaces in 3 dimensions,

MSD(S, S′) = [ Σ_{p∈S} d(p, S′) + Σ_{p′∈S′} d(p′, S) ] / ( |S| + |S′| ),

where S and S′ are 2 contour surfaces consisting of points p and p′, and d(p, S′) is the minimum distance between point p (on surface S) and surface S′. An MSD of zero indicates perfect agreement between contours, while higher values indicate increasing difference. DSC measures the spatial overlap between 2 segmentations, A and B, and is defined as

DSC(A, B) = 2 |A ∩ B| / ( |A| + |B| ),

where ∩ is the intersection. A DSC of one indicates complete overlap between segmentations, while zero indicates no overlap. HD95 measures the maximum distance between the contours, excluding the 5% of points which are furthest apart. This is a variation of the Hausdorff distance (HD), which measures the maximum distance between the contours at any point. HD95 is commonly used for the evaluation of segmentation accuracy in RT [18].
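As a concrete sketch of how these three metrics can be computed from voxelized binary masks, the snippet below uses a Euclidean distance transform to obtain surface-to-surface distances. The helper names, the use of scipy, and the default voxel spacing are our own choices rather than the authors' implementation, and the sketch assumes both masks are non-empty.

```python
import numpy as np
from scipy import ndimage

def surface(mask: np.ndarray) -> np.ndarray:
    """Boolean array marking the surface voxels of a binary mask."""
    return mask & ~ndimage.binary_erosion(mask)

def _surface_distances(a: np.ndarray, b: np.ndarray, spacing) -> np.ndarray:
    """Distances from each surface voxel of a to the nearest surface voxel of b."""
    dist_to_b = ndimage.distance_transform_edt(~surface(b), sampling=spacing)
    return dist_to_b[surface(a)]

def dsc(a: np.ndarray, b: np.ndarray) -> float:
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def msd(a: np.ndarray, b: np.ndarray, spacing=(3.0, 1.0, 1.0)) -> float:
    d = np.concatenate([_surface_distances(a, b, spacing),
                        _surface_distances(b, a, spacing)])
    return float(d.mean())

def hd95(a: np.ndarray, b: np.ndarray, spacing=(3.0, 1.0, 1.0)) -> float:
    d = np.concatenate([_surface_distances(a, b, spacing),
                        _surface_distances(b, a, spacing)])
    return float(np.percentile(d, 95))

if __name__ == "__main__":
    # Two overlapping toy "contours" on a coarse grid, purely for illustration.
    a = np.zeros((20, 40, 40), dtype=bool); a[5:15, 10:30, 10:30] = True
    b = np.zeros_like(a);                   b[5:15, 12:32, 10:30] = True
    print(dsc(a, b), msd(a, b), hd95(a, b))
```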
It is a more robust measure than HD as it has lower sensitivity to statistical outlier points [32]. An HD95 of zero indicates the best agreement between contours, while higher values indicate increasing difference.

To investigate whether the auto-contours were consistent with the performance of the manual contours, we classified each contour as either automatic or human generated. We then used R version 4.3.0 [33] and lme4 [34] to perform a linear mixed effects analysis of the relationship between MSD and contour type (automatic/human). The model included contour type as a fixed effect and patient as a random effect (since we expect random variations in contour quality between patients). P-values for the null hypothesis (that contour type does not affect the MSD value) were generated from likelihood ratio tests of the full model against the same model without contour type as a fixed effect. This analysis was also repeated for DSC.

Dosimetric validation

We further validated our auto-contouring model by generating lung RT plans using the manual and automatic contours for the CAA. This was done to assess the clinical impact of using automatic contours for RT planning. All plans were created with the Philips Pinnacle treatment planning system (v16) using a dual arc VMAT technique to deliver 55 Gy in 20 fractions to the Planning Target Volume (PTV). A maximum dose objective of 19.5 Gy was set for the CAA during optimization (see Supplementary Material for further details of RT planning objectives). To ensure consistent plan quality, all plans were generated automatically using a scripted process without manual intervention.

For all plans, we calculated the CAA dose metrics using the STAPLE contour, regardless of which contour was used for plan optimization, as this was the best estimate of the true CAA contour. For each dose metric, the consistency of the plan generated using automatic contours with the plans generated using manual contours was tested using the same linear mixed effects analysis as used for the geometric validation.

Robustness testing and qualitative scoring

Robustness of the contouring tool was investigated by applying it to a larger test set of 198 CT scans from the same centre. The aim was to investigate the consistency of the proposed contouring algorithm and to reveal any rarer cases of contouring failure which may not be apparent with the smaller test set used for the geometric and dosimetric validations. Auto-contours for all 198 cases were reviewed visually on all slices to check that contours were produced and to identify any gross failures (ie, no overlap with the actual CAA). Additionally, 45 auto-contours (randomly selected from the 198 cases used for robustness testing) were rated by an expert clinician (J.K.) using a 1-5 Likert scale, as shown in Table 2.

Assessment of clinician-edited contours

Following the clinical implementation of the CAA auto-contouring software, the quality of auto-contours for the first 50 patients was assessed. All auto-contours were checked by a clinician and edited where necessary to produce a CAA contour that was suitable to use for plan optimization. The clinician-accepted contours were then compared to the original auto-contours in terms of MSD and DSC to evaluate the degree of editing that was required.
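The mixed-effects comparison used in the geometric and dosimetric validations above (a random intercept per patient, a fixed effect for contour type, and a likelihood-ratio test against the model without that fixed effect) was performed by the authors in R with lme4. Purely as an illustration, an equivalent analysis can be sketched in Python with statsmodels; the column names and the example data frame below are assumptions of ours, not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

def contour_type_effect(df: pd.DataFrame, metric: str = "msd") -> float:
    """Likelihood-ratio test for a fixed effect of contour type (auto vs human)
    on a geometric metric, with a random intercept per patient."""
    full = smf.mixedlm(f"{metric} ~ contour_type", df, groups=df["patient"]).fit(reml=False)
    null = smf.mixedlm(f"{metric} ~ 1", df, groups=df["patient"]).fit(reml=False)
    lr = 2.0 * (full.llf - null.llf)     # likelihood-ratio statistic
    return float(stats.chi2.sf(lr, df=1))  # p-value, 1 d.o.f. for the added term

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    demo = pd.DataFrame({
        "patient": np.repeat(np.arange(10), 4),                 # 10 patients
        "contour_type": ["auto", "human", "human", "human"] * 10,  # 1 auto + 3 manual each
        "msd": rng.normal(1.3, 0.3, 40),                         # synthetic MSD values
    })
    print(contour_type_effect(demo, "msd"))
```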
Results

Auto-contouring with 3D CNN

Figure 3 illustrates example CAA contours for the 10 validation patients produced by the 3D CNN (red), along with manual contours from 3 different observers (blue) and the consensus manual contour (yellow). Variation between the different manual contours, the automatic contour, and the consensus contour can be observed. Contour variation tends to be smaller at the border between the right atrium and lung tissues (indicated by arrows A in Figure 3) and greater around the left coronary artery (arrows B in Figure 3). CNN inference produced the cardiac substructure masks in approximately 1 s per case (running on an Intel Xeon 6134 CPU), although full processing took 30-60 s per case (including data input/output, cropping, and postprocessing).

Geometric validation

Figure 4 shows the results of the geometric validation metrics MSD, DSC, and HD95 for manual contours (blue dots) and auto-contours (orange crosses) compared to the STAPLE consensus contour. Table 3 shows the results of the geometric validation metrics for auto-contours and manual contours. For each metric (MSD, DSC, and HD95), the auto-contour mean value over the 10 patients is shown, along with the mean value for manual contours (averaged first over the 3 observers for each patient and then over the 10 patients). Also shown are the standard deviation between the manual observers (calculated separately for each patient and then averaged over patients) and the P-value for the metric not being dependent on contour type (automatic/human), calculated from the mixed effects model.

For each patient, a range of values for the manual contours is observed. For DSC, the auto-contour value is generally around the lower end of this range (or the upper end of the range for MSD and HD95). This indicates that the auto-contour performance is slightly poorer than the manual observers on average, although it is consistent with the range of performance expected from manual observers. This observation is reflected by the mean values in Table 3, where the auto-contour mean DSC (0.86) is slightly poorer than the manual contour value (0.88). The mixed effects model analysis indicates that DSC and HD95 are not significantly correlated with contour type (automatic vs manual), although for MSD the effect was significant.
Dosimetric validation

Figure 5 shows dose metrics for lung RT plans with CAA sparing generated using automatic, manual, and STAPLE CAA contours (additional dose metrics are shown in the Supplementary Material). The blue dots illustrate the range of values achieved for plans optimized using the different manual contours for the CAA. The orange cross indicates the plan optimized using the CAA auto-contour, which in the majority of cases is within the range of the manual plans (blue dots). Table 4 shows the results of the dosimetric validation metrics compared between plans optimized using auto-contours and plans optimized using manual contours. For each dose metric, the auto-contour mean value over the 10 patients is shown, along with the mean value for manual contour plans (averaged first over the 3 observers for each patient, and then over the 10 patients). Also shown is the standard deviation between the manual observer plans (calculated separately for each patient and then averaged over patients), which indicates how much variation in each dose metric occurs due to interobserver variation in the manual CAA contours. The final column shows the P-value for the metric not being dependent on the contour type (automatic/manual), calculated from the mixed effects model. Only the CAA maximum dose was significantly dependent on contour type. Significance testing was done using a significance level of α = 0.05, without correction for multiple comparisons. Note that correcting for multiple comparisons via either the Bonferroni or Holm methods [35] would result in none of the dose metrics being significant. Dose-volume histograms of the CAA for plans optimized using automatic and manual CAA contours are shown in the Supplementary Material.

Robustness testing

Auto-contouring ran successfully with no gross failures for all 198 tested cases. Of the 45 auto-contours rated by an expert clinician, 80% were rated as "Good" (minimal editing required), with the remaining 20% of cases rated as "Useful."

Assessment of clinician-edited contours

Figure 6 shows histograms of the MSD and DSC values between the original auto-contours and the clinician-accepted contours for 50 lung cancer cases which were planned with sparing of the CAA. The mean MSD value is 1.2 mm, the mean DSC value is 0.93, and the mean HD95 value is 4.6 mm.

Discussion

The deep-learning auto-contouring model for the CAA was validated using a range of metrics. Our results show that the auto-contours were acceptable and consistent with the performance of human observers for most metrics. Auto-contouring performance for the CAA cannot be directly compared to other published heart substructure auto-contouring models, as these do not include the combined CAA as a structure. However, the mean DSC of 0.86 for the combined CAA structure used in our model is consistent with the performance of other reported models for the main heart chambers, which are in the range 0.75-0.87 [15], 0.78 [7], 0.70-0.79 [8], 0.89 [9], 0.81-0.93 [10], 0.87-0.92 [12], 0.76-0.88 [13], and 0.82-0.88 [14].
It is worth noting that the assessment of clinician-edited contours after clinical implementation showed better results than the initial testing (DSC of 0.93 vs 0.86, and MSD of 1.2 vs 1.6 mm). This is to be expected, as the clinician-edited contours in the clinical implementation use the auto-contour as a starting point, whereas in the initial testing clinicians contoured independently of the auto-contour. This illustrates that results in the clinical setting are not always the same as those from the commissioning (preimplementation) phase. It is therefore important to repeat validation tests after implementation of an Artificial Intelligence (AI)-based tool, to ensure that acceptable performance is maintained [36].

We also validated our auto-contouring model against the expected interobserver variation between manual observers, finding a mean DSC of 0.88 between manual observers for the CAA. This is similar to the value for auto-contours generated using our model (0.86). DSC values for all contours were calculated in comparison to the STAPLE consensus contour generated from the 3 manual observer contours. This will tend to reduce the apparent error for the manual contours, as the STAPLE contour will be biased towards them. Although the structure studied here is not directly comparable, the result is similar to the degree of interobserver variation seen for the larger heart substructures by Milo et al [37], where a median DSC of 0.78-0.96 was found for the whole heart and cardiac chambers, and by Zhou et al [8], where a DSC of 0.84-0.90 was reported for the main heart chambers.

We analysed dose statistics of lung RT plans optimized to reduce dose to the CAA, comparing plans based on auto-contours to those based on manual contours. A distinguishing feature of this study is the consideration of interobserver variation in the dose-based validation of the contouring model. We found that plans based on auto-contours were consistent with plans optimized using manual contours for all dose metrics except CAA maximum dose, which tended to be higher for plans optimized using automatic CAA contours. However, the difference for this metric became insignificant after adjusting for multiple comparisons. It is not surprising that this dose metric was the most sensitive to small differences in the CAA contour, since the plans were specifically optimized to reduce dose to this structure. The resulting plans often have a steep gradient in the dose distribution adjacent to the CAA. Small differences in the CAA contours can therefore lead to a large change in the maximum dose to that structure. However, the 1-cc max dose to CAA metric was found to be consistent between automatic and manual contours, indicating that the largest differences between the plans were limited to a small volume.
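The Bonferroni and Holm adjustments referred to above are standard corrections applied when several dose metrics are tested at once. A minimal sketch using statsmodels is shown below; the p-values are placeholders rather than the study's actual per-metric results.

```python
# Sketch of the multiple-comparison adjustment (Bonferroni and Holm) applied
# to a set of per-metric p-values. Values are illustrative only.
from statsmodels.stats.multitest import multipletests

raw_p = [0.03, 0.20, 0.45, 0.61, 0.74]  # one hypothetical p-value per dose metric

for method in ("bonferroni", "holm"):
    reject, p_adj, _, _ = multipletests(raw_p, alpha=0.05, method=method)
    print(method, [round(p, 3) for p in p_adj], list(reject))
```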
Our auto-contouring robustness test with 198 cases and qualitative scoring with 45 cases showed that the software produces contours free of gross errors in all cases and that 80% of cases were rated as "Good," requiring only minimal editing for clinical use. The remaining 20% of contours were rated as "Useful," with no contours rated as "Not useful." Other studies reporting qualitative scoring of heart substructure auto-contours include Garrett Fernandes et al [12], where heart chamber contours were rated clinically acceptable in 96%-100% of 99 cases; Walls et al [11], who rated 20 patients, finding all heart chamber auto-contours to be either Good or Acceptable; and Haq et al [10], who rated 25 cases, finding that 85% were acceptable for clinical use with no adjustments. In a large cohort of 1429 cases, Bruns et al [9] found 83%-96% acceptable auto-contour quality for the main heart chambers, although only selected slices were reviewed rather than the full contour.

Following the validation studies reported in this article, the CAA auto-contouring model has been implemented in the routine clinical setting. Since April 2023, curative-intent, non-SABR lung cancer patients at our institution are planned with optimization objectives to reduce dose to the CAA, which represents a new OAR. The use of the auto-contouring tool has facilitated the implementation of CAA-sparing RT planning in all patients with stage 1-3 Non-Small Cell Lung Cancer (NSCLC) treated with non-SABR curative-intent RT at our institution. Indeed, the majority of clinical oncologists on the lung team were not familiar with the contouring of this region and have limited time due to a large workload. Deep-learning-based auto-contours of the CAA are generated and imported to the RT treatment planning system. The contours are then checked (and if necessary edited) by a radiation oncologist before being used for plan optimization [38]. Any changes in clinical outcomes resulting from limiting dose to the CAA will be analysed prospectively using rapid-learning methodology and real-world data as part of the RAPID-RT project [39].

Conclusion

In this study, we have developed a CNN-based tool that can automatically contour the CAA for lung RT patients. We have validated the auto-contouring performance using various methods, such as geometric and dosimetric measures, and compared it with the interobserver variation of manual contours. Our results show that the auto-contours, and the plans optimized using them, are consistent with the manual contours and the plans optimized using those. In addition, the auto-contouring tool is robust, and performance monitoring since clinical implementation has shown that only a small degree of manual contour editing is required. The development of the auto-contouring system facilitates the clinical implementation of dose reduction to the CAA, which may improve patient outcomes by reducing cardiac toxicity.

Figure 1. 3D render of cardiac avoidance area showing right atrium (red), aortic valve root (green), proximal portions of left coronary artery (blue), and right coronary artery (orange). Whole heart outline is shown in yellow.

Figure 2. Flowchart of tests and data used at different stages of validation for heart substructure auto-contouring model.
Figure 3. Comparison of CAA auto-contours (red), manual contours (blue), and STAPLE consensus contours (yellow). Arrows A indicate the border between the right atrium and lung tissue, with generally good agreement between manual and automatic contours. Arrows B indicate the left coronary artery, with greater variation between contours. CAA = cardiac avoidance area; STAPLE = simultaneous truth and performance level estimation.

Figure 4. Comparison of MSD and DSC between manual and automatic contours relative to the consensus STAPLE contour for 10 patients. DSC = dice similarity coefficient; MSD = mean surface distance; STAPLE = simultaneous truth and performance level estimation.

Figure 5. Comparison of dose statistics between plans optimized using automatic and manual CAA contours. CAA = cardiac avoidance area.

Figure 6. MSD and DSC values between the original auto-contour and the clinician-edited contour for 50 cases. DSC = dice similarity coefficient; MSD = mean surface distance.

Table 1. Patient characteristics of cases used for training and validation of the auto-contouring model.

Table 2. Qualitative scoring scale used to rate auto-contour quality.

Table 3. Comparison of auto vs manual contour geometric validation metrics.

Table 4. Comparison of auto vs manual contours for dosimetric validation metrics.
Freezing Protocol Optimization for Iberian Red Deer (Cervus elaphus hispanicus) Epididymal Sperm under Field Conditions

Simple Summary The germplasm banks of wild species, such as Iberian red deer, are not widespread, mainly due to the difficulties of collecting and cryopreserving reproductive cells. Optimal freezing protocols under field conditions could be a breakthrough for these species. In this study, epididymal sperm was evaluated using two methods of sperm storage during refrigeration (tube and straw); four equilibration periods (0, 30, 60, and 120 min); and four methods of freezing (cryopreservation in liquid nitrogen vapors in a tank (control) or box, freezing in dry ice, or freezing over a metallic plate). The results showed that samples stored in straws during refrigeration produced fewer apoptotic spermatozoa and more viable spermatozoa with active mitochondria. A long equilibration period (120 min) yielded a higher percentage of acrosomal integrity. Moreover, there was no difference in sperm quality between freezing in liquid nitrogen vapors in a tank or box. However, a worse quality was obtained when the samples were cryopreserved in dry ice or over a metallic plate compared to the control.

Abstract Creating germplasm banks of wild species, such as the Iberian red deer (Cervus elaphus hispanicus), can be challenging. One of the main difficulties is obtaining and cryopreserving good-quality reproductive cells when the spermatozoa are recovered from epididymides after death. To avoid a loss of seminal quality during transport, developing alternative methods for cooling and freezing sperm samples under field conditions is necessary. The objective of this study was to evaluate the effects of different equilibration durations and different techniques of cooling and freezing on Iberian red deer epididymal sperm quality after thawing, in order to optimize the processing conditions in this species. Three experiments were carried out: (I) evaluation of refrigeration in straws or tubes of 15 mL; (II) study of the equilibration period (0, 30, 60, or 120 min); and (III) comparison of four freezing techniques (liquid nitrogen vapor in a tank (C), liquid nitrogen vapor in a polystyrene box (B), dry ice (DY), and placing straws on a solid metallic plate floating on the surface of liquid nitrogen (MP)). For all experiments, sperm motility and kinematic parameters, acrosomal integrity, sperm viability, mitochondrial membrane potential, and DNA integrity were evaluated after thawing. All statistical analyses were performed by GLM-ANOVA analysis. Samples refrigerated in straws showed higher values (p ≤ 0.05) for mitochondrial activity and lower values (p ≤ 0.05) for apoptotic cells. Moreover, acrosome integrity showed significant differences (p ≤ 0.05) between 0 and 120 min, but not between 30 and 60 min, of equilibration. Finally, no significant differences were found between freezing in liquid nitrogen vapors in a tank or in a box, although a low quality was obtained after thawing when the samples were cryopreserved in dry ice or by placing straws on a solid metallic plate floating on the surface of liquid nitrogen. In conclusion, under field conditions, it would be possible to refrigerate the sperm samples by storing them in straws with a 120 min equilibration period and to freeze them in liquid nitrogen vapors in a tank or box.
Introduction

In the last few decades, progress has been made in establishing Genome Resource Banks (GRBs), which ensure long-term genetic preservation and variability and improve the reproductive efficiency of wild species and domestic animals [1][2][3][4][5][6][7][8][9]. This progress is mainly due to the development of new techniques of sperm cryopreservation, using specific freezing media or freezing rates, which altogether have improved sperm cryosurvival [10,11]. However, sperm cryopreservation is a complex process that involves many factors, both cellular (e.g., shape, size, membrane lipid composition, sperm source) and dependent on the freezing protocol (e.g., cooling and freezing rates or use of cryoprotectants), that may entail some risks of sperm injury related to osmotic, biochemical, and physicochemical intracellular changes, which hinder the storage and preservation of sperm reproductive potential [9,[11][12][13][14]. Some studies have described the influence of several factors, from the type of species [11] to methodological aspects, such as the use of permeable and nonpermeable cryoprotective agents [6,15], cooling and thawing techniques [16][17][18][19], and different methods of semen collection [19,20], on post-thaw sperm viability. Thus, these studies highlight that, to ensure the survival and viability of the spermatozoa, all factors involved in semen cryopreservation must be considered.

Lately, there has been significant interest in using artificial reproductive technologies (ARTs) for the handling of Iberian red deer (Cervus elaphus hispanicus; Hilzheimer, 1909) populations, not only because of their livestock and recreational hunting interest [6], but also because they could be used as a model for other related endangered subspecies. Notably, in wild deer populations within fenced hunting estates, inbreeding has led to genetic isolation and has resulted in detrimental effects on some components of female fitness and male reproductive ability [21]. In this situation, the cryopreservation of Iberian red deer epididymal spermatozoa has offered the possibility of progressing in establishing genetic resource banks for this species, since spermatozoa can be obtained from certain types of hunting [6,7,[22][23][24][25][26][27][28]. Some studies have demonstrated that it is possible to obtain viable sperm from the epididymis 24 h after death if the testis is stored at room temperature [29], or up to 4 days if held at 5 °C [26]. The viability of epididymal spermatozoa has been surprisingly high even after freezing and thawing [27]. However, although progress has been made in the cryopreservation of deer epididymal semen, improvements are still needed to maximize its quality after freeze-thaw protocols. This is because most of the protocols used in the cryopreservation of epididymal sperm have been adapted from those used for ejaculated sperm, despite the physiological differences between them [6]. Therefore, all semen cryopreservation factors, parameters, and phases must be considered to assure sperm viability and to develop specific protocols for Iberian red deer epididymal spermatozoa. The transport containers, diluents, storage techniques during refrigeration, cryoprotectant agent (CPA), cooling rate, equilibration period, and freezing and thawing protocols are the key to success in terms of sperm survival.
It has been seen that packing techniques during freezing could affect post-thaw sperm quality in different species [30][31][32], and the surface-to-volume ratio determined by the method used seems to be decisive [33]. However, there are no studies that have evaluated the effects of different storage methods during the refrigeration of sperm samples on their quality. On the other hand, the effects of the equilibration time have been studied in other species [34,35], with a variety of results depending on the type and concentration of the cryoprotectant [36]. The equilibration period encourages sperm membrane stability, mainly in the acrosomal membrane, due to the adaptation of membrane lipids to cooler temperatures [35,37]. This facilitates the movement of penetrating cryoprotectants through the membrane and allows water to move out of the cell, which minimizes ice formation during the freeze-thaw process and diminishes possible damage [38]. Several studies have been conducted on semen from different species, including sheep [39], goats [40], and cattle [41], to identify the optimum equilibration period. Although in Iberian red deer sperm the usual equilibration period used in freeze-thaw protocols is 120 min [6], there is a lack of studies on the effects of different equilibration periods on post-thaw epididymal sperm quality in this species. Apart from this, the effects of freezing methods on sperm quality have been widely studied in various species, with different results among them [42]. Some studies on ungulates have shown the effects of storage temperature on epididymal and ejaculated semen [43,44] and the effects of the cooling rate on the freezability of Iberian red deer sperm [6].

Most Iberian red deer samples are obtained from locations far from the laboratory, which may decrease sperm quality before processing. A possible solution could be to perform the freezing process in the field. However, conventional sperm freezing requires equipment that cannot easily be transported to the sample collection site, so protocols adapted to field conditions are needed to avoid a loss of sample quality. Bearing all this in mind, the overall objective of this work was to develop a specific protocol for the cryopreservation of Iberian red deer epididymal spermatozoa that improves the outcome and can be used under field conditions. In this regard, three different experiments were developed: (1) to determine the effects of two storage techniques during the cooling phase, (2) to explore the action of four equilibration periods, and (3) to evaluate possible alternatives to the conventional method of cryopreservation in liquid nitrogen vapors.

Testes and Sperm Collection

Testes were collected from 25 Iberian red deer for each experiment. All the animals were adults and were hunted legally during the rutting season in Castilla-La Mancha (Spain) within the harvest plan of the game reserve, according to the Spanish Harvest Regulation, Law 2/93 of Castilla-La Mancha, which conforms to European Union regulations. The testes were collected 6 h after slaughter and transported to the laboratory in plastic bags at room temperature (approximately 15 °C). Testes were then removed from the scrotal sac, and the caudal epididymides were separated and transferred into a petri dish.

Sperm Processing and Fresh Quality Evaluation

Sperm were collected from the distal epididymis according to the method described by Soler et al.
[46] and diluted in an exact volume (0.5 mL) of freezing medium fraction A (Tris-citrate-fructose and 20% clarified egg yolk). Subsequently, sperm concentration was assessed with a Neubauer chamber (Marienfeld, Lauda-Königshofen, Germany). In addition, sperm motility was assessed for each sample as described below. Only samples with sperm motility higher than 60% were cryopreserved.

Cryopreservation of Epididymal Spermatozoa

Briefly, sperm was again diluted in a two-step procedure: first, semen was diluted to 400 × 10^6 spermatozoa/mL with fraction A, and then fraction B was added until achieving the final concentrations of spermatozoa (200 × 10^6 spermatozoa/mL) and glycerol (6%). Both steps took place at room temperature. After dilution, the study was subdivided into three experiments with the following experimental design (Figure 1). Experiment 1 evaluated different storage techniques during sample refrigeration. Experiment 2 examined the effect of different equilibration times, and Experiment 3 determined the effect of several freezing techniques on post-thaw sperm quality.

Experiment 1: Effect of Storage Techniques during Refrigeration on Post-Thaw Sperm Quality of Iberian Red Deer Epididymal Spermatozoa

To determine the effects of two different storage techniques, two aliquots from each deer were treated and diluted as described above. Once diluted, one aliquot was refrigerated in a 15 mL collector tube immersed in water at 5 °C, and the other was refrigerated in a straw immersed in water at 5 °C. Both samples were refrigerated for 10 min until the temperature of 5 °C was reached. Then, samples were kept at this temperature for 120 min and frozen in liquid nitrogen vapor in a tank. Note that the samples refrigerated in 15 mL collector tubes were also loaded into 0.25 mL plastic straws before being frozen.

Experiment 2: Effect of Different Equilibrium Times on Post-Thaw Sperm Quality of Iberian Red Deer Spermatozoa

To evaluate several equilibration times and their effects on post-thaw spermatozoa survival, four aliquots from every stag were treated and diluted as described above.
After dilution, samples were refrigerated at 5 °C for 10 min in a tube and maintained at this temperature for 0, 30, 60, or 120 min. Then, they were loaded into 0.25 mL plastic straws and frozen in liquid nitrogen vapor in a tank.

Experiment 3: Effect of Freezing Techniques on Post-Thaw Sperm Quality of Iberian Red Deer Epididymal Spermatozoa

To search for the most suitable method under field conditions apart from a liquid nitrogen tank, three alternative freezing techniques were evaluated. In this experiment, four aliquots from every stag were used, and they were diluted as described above. Once diluted, samples were cooled to 5 °C for 10 min in a tube and held for 120 min at this temperature to equilibrate. Following the equilibration period, aliquots were frozen with different methods. Four groups were included in this experiment: control (C), box (B), dry ice (DY), and metallic plate (MP). The C group was frozen by the standard methodology (liquid nitrogen vapor in a tank) [6]. The B group was frozen by placing the straws in a polystyrene box with liquid nitrogen, first at 4 cm above the liquid nitrogen for 10 min, and then dipped in the liquid nitrogen. The DY group was placed into a polystyrene box with dry ice inside it and maintained for 10 min directly on the dry ice; once this period had passed, straws were submerged into the liquid nitrogen. Finally, the MP group consisted of placing straws on a solid metallic plate floating on the surface of liquid nitrogen for 10 min; after this period, straws were submerged into the liquid nitrogen.

Thawing and Evaluation of Post-Thaw Spermatozoa Quality

The thawing procedure was accomplished by placing the straws in a 37 °C water bath for 20 s. Sperm motility, acrosome status, and plasma membrane integrity were assessed for each sample to determine sperm quality in vitro.

Sperm Motility Assays

All samples were evaluated for sperm motility by a computer-assisted sperm analyzer (CASA). A prewarmed (37 °C) Makler counting chamber (10 µm depth; Sefi Medical Instruments, Haifa, Israel) was loaded with 5 µL of the sample. The CASA system consisted of a trinocular optical phase-contrast microscope (Nikon Eclipse 80i, Nikon Instruments Inc., Tokyo, Japan) and a Basler A302fs digital camera (Basler Vision Technologies, Ahrensburg, Germany). The camera was connected to a computer by an IEEE 1394 interface. Images were captured and analyzed using the Sperm Class Analyzer (SCA 2002) software (Microptic S.L., Barcelona, Spain) adjusted to ram spermatozoa. The sample was examined with a 10× objective lens (negative phase contrast) in a microscope with a heated plate, and five areas were recorded. The following parameters were assessed: percentage of motile spermatozoa (SM); curvilinear velocity (VCL, µm/s); straight-line velocity (VSL, µm/s); average path velocity (VAP, µm/s); linearity index (LIN, %); and amplitude of lateral head displacement (ALH, µm).

Acrosomal Integrity Percentage

Acrosomal integrity was assessed by phase-contrast microscopy (Nikon Eclipse 80i, Nikon Instruments Inc., Tokyo, Japan) with a 400× objective lens. For this purpose, 5 µL of the diluted semen was fixed in 2% glutaraldehyde in 0.165 M cacodylate/HCl buffer at pH 7.3 (1:20 dilution). The percentage of spermatozoa with intact acrosomes (%NAR) was calculated by counting those with an intact apical rim. At least 100 cells of each sample were evaluated.
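The CASA descriptors listed above (VCL, VSL, VAP, LIN, ALH) are all derived from the digitized track of the sperm head. The sketch below illustrates one common way to approximate them from a track of head positions, assuming a fixed frame rate and a simple moving-average path; the smoothing window and the ALH approximation are our own simplifications and do not reproduce the SCA software's proprietary algorithm.

```python
import numpy as np

def casa_kinematics(xy: np.ndarray, frame_rate: float, smooth: int = 5) -> dict:
    """Approximate CASA descriptors from a sperm-head track.
    xy: (n_frames, 2) head positions in micrometres; frame_rate in Hz.
    Returns VCL, VSL, VAP (um/s), LIN (%), ALH (um)."""
    duration = (len(xy) - 1) / frame_rate
    step = np.linalg.norm(np.diff(xy, axis=0), axis=1)
    vcl = step.sum() / duration                              # along the actual track
    vsl = np.linalg.norm(xy[-1] - xy[0]) / duration          # straight first-to-last point
    kernel = np.ones(smooth) / smooth                        # moving-average ("average") path
    avg_path = np.column_stack([np.convolve(xy[:, i], kernel, mode="valid") for i in range(2)])
    vap = (np.linalg.norm(np.diff(avg_path, axis=0), axis=1).sum()
           / ((len(avg_path) - 1) / frame_rate))
    lin = 100.0 * vsl / vcl                                  # linearity index
    # ALH approximated as twice the maximum deviation of the head from the average path.
    trim = (len(xy) - len(avg_path)) // 2
    alh = 2.0 * np.max(np.linalg.norm(xy[trim:trim + len(avg_path)] - avg_path, axis=1))
    return {"VCL": vcl, "VSL": vsl, "VAP": vap, "LIN": lin, "ALH": alh}

if __name__ == "__main__":
    t = np.arange(50)
    track = np.column_stack([t * 2.0, 5.0 * np.sin(t / 3.0)])  # synthetic wobbling track
    print(casa_kinematics(track, frame_rate=25.0))
```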
Assessment of Sperm Viability and Mitochondrial Activity

Analyses of sperm viability and mitochondrial activity were performed using YO-PRO-1/propidium iodide (PI) and MitoTracker Deep Red (MT)/YO-PRO-1, respectively [47]. Briefly, the samples were diluted to a concentration of 10^6 spermatozoa/mL in BGM-3 solution and stained using the fluorophores. Sperm viability was assessed with 0.1 µM YO-PRO-1 (Invitrogen, Barcelona, Spain) and 10 µM PI, whereas mitochondrial membrane potential was assessed with 0.1 µM YO-PRO-1 and 0.1 µM MT. The tubes were left to rest for 20 min in the dark and then analyzed by flow cytometry. The samples were run through a flow cytometer (Cytomics FC500; Becton Dickinson, San José, California) furnished with a 488 nm argon-ion laser (excitation for YO-PRO-1 and PI) and a 635 nm He-Ne laser (excitation for MT). The FSC (forward-scattered light) and SSC (side-scattered light) signals were used to gate out debris (non-sperm events). Fluorescence from YO-PRO-1, PI, and MT was read using 525/25BP, 615DSP, and 675/40BP filters, respectively. All the parameters were read using logarithmic amplification. Ten thousand spermatozoa of each sample were recorded. Flow cytometer data were analyzed with WEASEL v2.6 (WEHI; Melbourne, Victoria, Australia) software using the following guidelines: the YO-PRO-1−/PI− and YO-PRO-1+/PI− sperm subpopulations were considered viable spermatozoa with an intact membrane and apoptotic spermatozoa, respectively, and the YO-PRO-1−/MT+ sperm subpopulation was considered live, with active mitochondria.

Sperm Chromatin Assessment

Samples were diluted in TNE buffer (0.01 M Tris-HCl, 0.15 M NaCl, 1 mM EDTA, pH 7.4) to a final sperm concentration of 2 × 10^6 spermatozoa/mL and immediately frozen in liquid nitrogen. Samples were stored at −80 °C until use. Chromatin stability was assessed following the SCSA® (Sperm Chromatin Structure Assay; SCSA Diagnostics, Brookings, SD, USA). This test measures the percentage of sperm with fragmented DNA and the degree of DNA damage [48]. For analysis by flow cytometry, samples were thawed in a 37 °C water bath, and 200 µL of the sperm sample was submitted to a DNA denaturation step by adding 0.4 mL of an acid-detergent solution (0.17% Triton X-100, 0.15 M NaCl, 0.08 N HCl, pH 1.4). After 30 seconds, samples were mixed with 1.2 mL of acridine orange (AO) solution (0.1 M citric acid, 0.2 M Na2HPO4, 1 mM EDTA, 0.15 M NaCl, 6 µg/mL AO, pH 6.0) and analyzed by flow cytometry after two and a half minutes. AO is a metachromatic fluorochrome that shifts from green (dsDNA, double-strand) to red (ssDNA, single-strand) depending on the degree of DNA denaturation. Samples were run through the Cytomics FC500, as described above, using the 488 nm laser and a 530/28BP filter for green fluorescence and a 620SP filter for red fluorescence. The DNA fragmentation index (%DFI total) was measured for every sample to indicate the amount of red emission produced by a sample relative to the total fluorescence emitted.

Statistical Analysis

All statistical analyses were performed using SPSS for Windows, version 22.0 (SYSTAT Software Inc., Evanston, IL, USA).
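In the SCSA analysis described above, a per-cell DNA fragmentation index is typically computed as the red fluorescence divided by the total (red + green) fluorescence of that cell. The sketch below illustrates that calculation from per-event intensities; the cut-off used here to classify a cell as having fragmented DNA is an arbitrary illustrative value, not the calibrated gate of the SCSA software.

```python
import numpy as np

def dfi_summary(red: np.ndarray, green: np.ndarray, threshold: float = 0.25) -> dict:
    """Per-cell DNA fragmentation index (red / total fluorescence) and the
    fraction of cells above a cut-off. Inputs are per-event intensities."""
    total = red + green
    dfi = np.divide(red, total, out=np.zeros_like(total, dtype=float), where=total > 0)
    return {
        "mean_DFI": float(dfi.mean()),
        "percent_DFI": float(100.0 * np.mean(dfi > threshold)),  # % of cells above cut-off
    }

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Synthetic red/green intensities for 10,000 events, for illustration only.
    print(dfi_summary(rng.gamma(2.0, 50.0, 10_000), rng.gamma(2.0, 400.0, 10_000)))
```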
A general linear model (GLM-ANOVA) that included the method of refrigeration (tube or straw), equilibration time (0, 30, 60, or 120 min), and freezing method (liquid nitrogen vapors in a tank or box, dry ice, or a metallic plate) as fixed factors and the sperm quality parameters as dependent variables was constructed to study the significant differences in post-thaw sperm quality parameters. Comparison of means was performed using the Bonferroni test. A p-value of ≤ 0.05 was considered statistically significant.

Results

The first experiment was performed to determine the effects of the storage methods during refrigeration on sperm quality. We studied two different methods of storage: in collector tubes or straws. As reported, no significant differences were found in NAR and viability (nonapoptotic) when samples were refrigerated in straws instead of 15 mL collector tubes (Figure 2). Moreover, DNA integrity was similar between the two forms of refrigeration. Nonetheless, apoptotic spermatozoa and viable spermatozoa with active mitochondria showed significant differences (p ≤ 0.05) between both treatments.

Regarding the kinematic sperm parameters, there were no differences in VCL, VSL, VAP, LIN, or ALH when samples were refrigerated in a tube or straw (Table 1).

Table 1. Effects of storage methods (tubes of 15 mL or 0.25 mL straws) during refrigeration of Iberian red deer epididymal spermatozoa on kinematic parameters. Data represented as mean ± SEM. VCL: curvilinear velocity (µm/s); VSL: rectilinear velocity (µm/s); VAP: velocity for the corrected trajectory (µm/s); LIN: linearity (%); ALH: lateral head displacement (µm). The same letter within columns indicates no significant differences (p ≥ 0.05).

In the second experiment, the same sperm parameters were studied according to different equilibration periods at 5 °C (0, 30, 60, and 120 min). No significant differences were found in SM, viability (nonapoptotic), apoptotic spermatozoa, viable spermatozoa with active mitochondria, and DNA integrity for 0, 30, 60, and 120 min (Figure 3). Nevertheless, NAR showed significant differences (p ≤ 0.05) between 0 and 120 min, with higher values for the longest equilibration time. Moreover, 120 min of equilibration yielded higher values of VSL and VAP than 30 or 60 min of equilibration or bypassing this step (Table 2).
Same letter within columns indicate not significant differences (p ≥ 0.05). In the second experiment, the same sperm parameters were studied according to different equilibration periods at 5 • C (0, 30, 60, and 120 min). No significant differences were found in SM, viability (nonapoptotic), apoptotic spermatozoa, viable spermatozoa with active mitochondria, and DNA integrity for 0, 30, 60, and 120 min ( Figure 3). Nevertheless, NAR showed significant differences (p ≤ 0.05) between 0 and 120 min, with higher values for the longest refrigeration time. Moreover, 120 min of refrigeration yielded values of VSL and VAP higher in relation to 30 or 60 min of refrigeration or bypassing this process ( Table 2). Finally, different freezing techniques were assessed to evaluate their effect on po thaw sperm quality. No significant differences were found in SM and NAR for C and (Figure 4). Nevertheless, SM and NAR showed significant differences (p ≤ 0.05) betwe these treatments and the remaining two groups (DY and MP), with the lowest values MP. Concerning viability, apoptotic spermatozoa, and viable spermatozoa with active m tochondria, no significant differences were found between C, B, and DY, although th treatments were different (p ≤ 0.05) to MP, with the lowest values for this freezing meth ( Figure 4). However, the DNA integrity was not affected by the freezing procedure, w similar values between treatments. Table 2. Effects of different equilibration times (0, 30, 60, and 120 minutes) on sperm motility CASA parameters of Iberian red deer epididymal spermatozoa. Data represented as mean ± SEM. VCLcurvilinear velocity (µm/seg); VSL-rectilinear velocity (µm/seg); VAP-velocity for the corrected trajectory (µm/seg); LIN-linearity (%); ALH-lateral head displacement (µm). Different letters within columns indicate significant differences (p ≤ 0.05). Finally, different freezing techniques were assessed to evaluate their effect on postthaw sperm quality. No significant differences were found in SM and NAR for C and B (Figure 4). Nevertheless, SM and NAR showed significant differences (p ≤ 0.05) between these treatments and the remaining two groups (DY and MP), with the lowest values for MP. Concerning viability, apoptotic spermatozoa, and viable spermatozoa with active mitochondria, no significant differences were found between C, B, and DY, although these treatments were different (p ≤ 0.05) to MP, with the lowest values for this freezing method ( Figure 4). However, the DNA integrity was not affected by the freezing procedure, with similar values between treatments. Equilibration In the same way as for the other sperm parameters, VCL, VSL, VAP, and ALH showed lower (p ≤ 0.05) values for the MP freezing method compared to C, B, and DY (Table 3). In the same way as for the other sperm parameters, VCL, VSL, VAP, and AL showed lower (p ≤ 0.05) values for the MP freezing method compared to C, B, and D (Table 3). Table 3. Effects of different freezing techniques on kinematics parameters of Iberian red deer e didymal spermatozoa. C (control-liquid nitrogen vapor in a tank), B (box-polystyrene box w liquid nitrogen inside), DY (dry ice-polystyrene box with dry ice inside), and MP (metallic plat solid metallic plate floating on the surface of liquid nitrogen). Data represented as mean ± SE VCL-curvilinear velocity (μm/seg); VSL-rectilinear velocity (μm/seg); VAP-velocity for the c rected trajectory (μm/seg); LIN-linearity (%); ALH-lateral head displacement (μm). 
Table 3. Effects of different freezing techniques on kinematic parameters of Iberian red deer epididymal spermatozoa. C (control-liquid nitrogen vapor in a tank), B (box-polystyrene box with liquid nitrogen inside), DY (dry ice-polystyrene box with dry ice inside), and MP (metallic plate-solid metallic plate floating on the surface of liquid nitrogen). Data represented as mean ± SEM. VCL-curvilinear velocity (µm/s); VSL-rectilinear velocity (µm/s); VAP-velocity for the corrected trajectory (µm/s); LIN-linearity (%); ALH-lateral head displacement (µm). Different letters within columns indicate significant differences (p ≤ 0.05).
Discussion
Given that a suitable freezing protocol for Iberian red deer epididymal spermatozoa under field conditions has not yet been published, this study focused for the first time on evaluating the effects of different storage methods during refrigeration, different equilibration times, and different freezing techniques on the quality of epididymal spermatozoa after thawing. All this was carried out with the aim of improving and simplifying the freezing conditions of epididymal spermatozoa in this species under field conditions. In this way, Experiment 1 showed that refrigeration in straws seems to be associated with higher levels of viable spermatozoa with active mitochondria and a significantly lower percentage of apoptotic cells in post-thaw samples. These findings align with previous studies in other species, in which a lower volume of the samples packed before refrigeration improved the motility parameters and fertility [31]. A lower sample volume could result in more contact with the refrigerant and with all the cryoprotective components (e.g., egg yolk, EDTA, citrate), allowing the homogeneous and rapid cooling of the sample [6,49]. A correct cooling rate enables cells to adapt to temperature changes, reducing the injuries originated by cold shock. It is known that spermatozoa are very sensitive to a rapid reduction in temperature from 25 to 5 °C, which induces stress in the membranes related to a reorganization of phospholipids and proteins [6,18,50,51]. This alters their functional status and permeability, affecting the function of ion channels, producing reactive oxygen species (ROS), and altering the potential of the mitochondrial membrane [11,52]. Therefore, cooling the samples in straws instead of tubes could minimize damage to the sperm during freezing and would also greatly facilitate the development of a freezing protocol for epididymal sperm samples in the field, where it is more complex to use cooling chambers at 5 °C to pack the straws after the cooling process. Besides this, in Experiment 2, different equilibration times were studied to simplify procedures. During the equilibration time, the period following the cooling stage, sperm membranes are stabilized at 5 °C to minimize cold injuries.
This is possible because this period allows water to exit and facilitates the entrance of permeable cryoprotectants (e.g., glycerol), which exert their cryoprotective effect on the sperm [35,[51][52][53]. In several studies of other species, an association has previously been found between an average equilibration time (2-4 h) and the preservation of the motility and integrity of sperm membranes [52][53][54][55][56]. However, there are no published studies regarding different durations of this stage and their effect on the thawing of Iberian red deer epididymal spermatozoa, with the most frequently used equilibration time being 120 min [6,26]. In the present study, there were differences in the NAR percentage between samples with no equilibration time and those kept at 5 °C for 120 min, but not between the middle length periods (30 and 60 min) and the 120 min group. Thus, even though there were no significant differences between the middle groups, we could observe that all these parameters tended to be better when an equilibration time was applied. Moreover, the longer equilibration time showed higher values for VSL and VAP. This suggests that, in Iberian red deer, applying an equilibration period may be essential to ensure epididymal sperm viability, regardless of its duration. Similarly, other research studies in Gyr bulls [57] obtained better post-thawing sperm quality when implementing an equilibration period, no matter its length. This apparent cryoresistance of the epididymal spermatozoa may be due to the morphological and biophysical properties of their lipid membrane compared with ejaculated spermatozoa, as well as their smaller size [1,8,58]. The latter would imply a higher osmotic tolerance by the epididymal spermatozoa, due to a lower water volume, facilitating the flow of cryoprotectants and water during shorter equilibration times [8]. Finally, the last factor we assessed in this study was the freezing methodology, to facilitate its development under field conditions. To date, no study has evaluated different freezing techniques for Iberian red deer epididymal sperm, for which the methods used are adapted from sheep and goats [59]. In these species, the conventional techniques widely used are based on nitrogen vapor freezing, in which samples are suspended in nitrogen vapor for approximately 15 min (with the cooling speed being on average 16-25 °C/min) and then rapidly immersed in liquid nitrogen at −196 °C for storage [6,60]. Studies in some wild ruminant species such as mouflon and fallow deer have shown promising results in semen thawing using this methodology compared to other ultrarapid freezing techniques [8,61]. Herein, we compared three alternative freezing methods (B, DY, and MP) using the same semen sample packaging conditions (plastic straws) as the conventional nitrogen vapor freezing in a tank. The results showed that only the metallic plate method (MP) obtained significantly worse outcomes in post-thaw semen quality (see Figure 4). This lower sperm quality in the MP procedure may be due to the high conductivity of the metal, which would cause a faster loss of cold compared with other materials. In addition, the fact that the straws were placed directly on the metal plate would have led to a high cooling rate on the side in direct contact with the plate and a much lower cooling rate on the top side, which was not in direct contact. Therefore, no homogeneous freezing would occur [62].
In contrast, the polystyrene-box-frozen samples gave results comparable to those of the control group (C), without significant differences, which could make them a suitable alternative to tank-freezing with liquid nitrogen. Some studies have reported similar results between dry ice and liquid nitrogen vapor freezing [63]. It should be noted that, despite the recent interest in ultrafast freezing (vitrification) techniques, which are easy to perform and low-cost, they have not been considered in deer species. This is not only because of the significant effect that high concentrations of cryoprotectants have on the post-thaw quality of the sperm [8,62,64], but also because they are not very cost-effective. In most cases, the vitrification techniques involve freezing a small volume of semen, which requires more advanced reproductive techniques such as ICSI, which needs only one spermatozoon per oocyte to be fertilized. On the other hand, in most wild ruminant species, such as the Iberian red deer, artificial insemination (AI) and in vitro fertilization (IVF) are the most frequently used reproductive techniques, which require a larger volume of semen sample for their execution. For this reason, we thought that freezing in a polystyrene box would present some advantages, as it requires a material that can be handled in field conditions, which facilitates the preservation of sperm samples collected outside the laboratory for species where vitrification is not an option or needs to be optimized.
Conclusions
Typically, the collection and processing of Iberian red deer sperm samples are carried out on captured males under field conditions. In this work, we introduce variations to the standard technique to develop a specific protocol that is both easy to implement under field conditions and preserves sample quality. The best conditions, according to our findings, were: (1) storing samples in straws prior to refrigeration, as it minimizes cell damage due to temperature fluctuations; (2) considering a long equilibration time for the samples; (3) freezing samples in liquid nitrogen vapors using a polystyrene box, which has some advantages, as it is cheaper and more manageable. Thus, the protocol established can be improved and made more specific and more appropriate for application under field conditions, which would ultimately help establish a Genome Resource Bank for deer species, making possible not only the genetic improvement of the Iberian red deer but also the preservation of other related endangered subspecies.
Institutional Review Board Statement: This study did not use samples from experimental animals; the samples were collected from deceased animals that had been previously killed in hunting drives on designated hunting estates. In Spain, hunting is a legally regulated activity, specifically in the Autonomous Community of Castilla-La Mancha under Law 3/2015 on hunting activities in Castilla-La Mancha, which replaced the former Law 2/93.
Data Availability Statement: The results can be shared upon request.
Conflicts of Interest: The authors declare no conflict of interest.
Early-phase cumulative hypotension duration and severe-stage progression in oliguric acute kidney injury with and without sepsis: an observational study Background Managing blood pressure in patients with acute kidney injury (AKI) could effectively prevent severe-stage progression. However, the effect of hypotension duration in the early phase of AKI remains poorly understood. This study investigated the association between early-phase cumulative duration of hypotension below threshold mean arterial pressure (MAP) and severe-stage progression of oliguric AKI in critically ill patients, and assessed the difference in association with presence of sepsis. Methods This was a single-center, observational study conducted in the ICU of a university hospital in Japan. We examined data from adults with oliguric AKI who were admitted to the ICU during 2010–2014 and stayed in the ICU for ≥24 h after diagnosis of stage-1 oliguric AKI defined in the Kidney Disease Improving Global Outcomes (KDIGO) guidelines. The primary outcome was the progression from stage-1 oliguric AKI to stage-3 oliguric AKI (progression to oligoanuria and use of renal replacement therapy) according to the KDIGO criteria. During the first 6 h after oliguric AKI, we analyzed the association between cumulative time the patient had below threshold MAP (65, 70, and 75 mm Hg) and progression to stage-3. Results Among 538 patients with oliguric AKI, progression to stage-3 increased as the time spent below any threshold MAP was elongated. In the multivariable analysis of all patients, longer hypotension time (3–6 h) showed significant association with stage-3 progression for the time spent below MAP of 65 mm Hg (adjusted odds ratio (OR) 3.73, 95% confidence interval (CI) 1.53–9.09, p = 0.004), but the association was attenuated for the threshold MAP of 70 mm Hg (adjusted OR 2.35, 95% CI 0.96–5.78, p = 0.063) and 75 mm Hg (adjusted OR 1.92, 95% CI 0.72–5.15, p = 0.200). Longer hypotension time with the thresholds of 65 and 70 mm Hg was significantly associated with the risk of stage-3 progression in patients without sepsis, whereas the association was weak and not significant in patients with sepsis. Conclusions Even in a short time frame (6 h) after oliguric AKI diagnosis, early-phase cumulative hypotension duration was associated with progression to stage-3 oliguric AKI, especially in patients without sepsis. Background Acute kidney injury (AKI) is an extremely common burden in intensive care units (ICUs) [1]. When critical illness is complicated by AKI, there is a well-known association with increased mortality [1,2], but preventive measures have not yet been established. Maintenance of normal or even increased mean arterial pressure (MAP) could provide a benefit because autoregulation of blood flow is known to be lost during certain forms of AKI, especially in patients with sepsis [3]. Recent studies [4,5] suggest that maintaining MAP ≥70 mm Hg could better protect patients with sepsis and AKI compared to maintaining MAP ≥65 mm Hg as recommended in the worldwide guidelines for sepsis [6,7]. Other studies indicate that the time elapsed with blood pressure below threshold might be the key indicator of developing postoperative AKI [8,9]. On the other hand, the optimal blood pressure threshold is still a matter of debate. Some studies have failed to indicate the effectiveness of maintaining high MAP in AKI management [10,11]. Blood pressure management strategies to prevent progression of AKI to severe stages need to be established. 
Some reports exhibit the association between the time spent with blood pressure below threshold MAP and progression of AKI in patients with sepsis [12,13]. However, whether intensivists should also manage blood pressure in patients with AKI without sepsis remains a point of contention [4,5,[12][13][14]. Moreover, it is uncertain whether managing hypotension duration in earlyphase AKI is important in preventing progression to severe-stage AKI. This study focused on oliguria, an easy-to-find marker for early-stage AKI in the ICU. In the early phase of oliguric AKI (6 h after the diagnosis of oliguric AKI), we investigated the association between cumulative hypotension duration below threshold MAP and progression of AKI among critically ill patients with and without sepsis. Study design, setting, and patients This observational study reviewed data from all consecutive patients admitted to the ICU of the Jikei University School of Medicine, Tokyo, Japan, from January 2010 through December 2014. The study protocol was approved by the respective medical institutional review boards of Kyoto University (approval number R0432) and Jikei University (approval number . Because of the retrospective approach of this study and de-identification of personal data, the boards waived the need for informed consent. We examined data from consecutive patients aged ≥18 years who had not undergone maintenance dialysis and who stayed in the ICU for at least 24 h after the diagnosis of oliguric AKI, which was defined as stage-1 oliguric AKI, urine output <0.5 mL/kg/h for 6 h consecutively (Table 1), according to the Kidney Disease Improving Global Outcomes (KDIGO) criteria [15]. If patients with oliguric AKI were admitted to the ICU twice or more during the study period, only data from the first ICU admission were included. We excluded patients in whom blood pressure was not measured with an interval of <1 h during the 24-h period after oliguric AKI diagnosis, and those who died within 72 h after ICU admission. Patients who had been admitted after vascular surgery were excluded because they were frequently treated with higher target levels of blood pressure in the ICU to avoid spinal ischemia and paraplegia [16]. We also excluded patients in whom renal replacement therapy (RRT) was initiated within 6 h of oliguric AKI diagnosis. Data collection Data for analyses including age, sex, body weight, comorbidities, ward type before ICU admission, surgery type for postoperative patients, prevalence of sepsis at ICU admission, length of ICU stay, ICU mortality, and hospital mortality were collected from the ICU database. In this study, sepsis was defined as the presence of known active systemic infection at ICU admission, the presence of shock at ICU admission caused by suspected infection, or positive blood culture sampled at ICU admission. We also collected data on urine output for 7 days after ICU admission or until ICU discharge, blood pressure, and intravenous administration of vasoactive agents (dopamine, dobutamine, norepinephrine, and vasopressin) for 6 h after oliguric AKI diagnosis via the electronic medical record system (PIMS; Philips Japan Ltd.). The time between ICU admission and oliguric AKI diagnosis (time to oliguric AKI) was calculated. Illness severity was assessed using Acute Physiology and Chronic Health Evaluation (APACHE) II scores [17] and Sequential Organ Failure Assessment (SOFA) scores [18] during the first 24-h period after ICU admission. In the ICU, urine output was recorded every 2 h. 
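The urine-output criteria applied to these 2-hourly records can be made concrete with a short sketch. It is a minimal, hypothetical illustration rather than the study's actual analysis code (which, per the Statistical analysis section, was written in R/EZR); the function and field names are assumptions, and the thresholds are those of the KDIGO definitions used here (stage 1: <0.5 mL/kg/h for 6 consecutive hours; stage 3: <0.3 mL/kg/h for 24 h or anuria for 12 h, as detailed under Outcome measures below).

```python
# Minimal sketch: flag KDIGO urine-output criteria from 2-hourly urine volumes.
# Assumes a pandas Series of urine output (mL) per 2-h interval, indexed by time.
from typing import Optional

import pandas as pd

INTERVAL_H = 2.0  # urine output was charted every 2 h in this ICU


def first_run_below(uo_ml: pd.Series, weight_kg: float,
                    rate_threshold: float, hours: float) -> Optional[pd.Timestamp]:
    """Return the first timestamp at which urine output has stayed below
    `rate_threshold` (mL/kg/h) for `hours` consecutive hours, else None."""
    rate = uo_ml / (INTERVAL_H * weight_kg)            # mL/kg/h for each interval
    n = int(hours / INTERVAL_H)                        # consecutive intervals needed
    runs = (rate < rate_threshold).astype(int).rolling(n).sum() == n
    hits = runs[runs].index
    return hits[0] if len(hits) else None


def classify(uo_ml: pd.Series, weight_kg: float) -> dict:
    """Apply the stage-1 and stage-3 urine-output criteria used in this study."""
    return {
        "stage1_oliguria_at": first_run_below(uo_ml, weight_kg, 0.5, 6),
        "stage3_oligoanuria_at": first_run_below(uo_ml, weight_kg, 0.3, 24),
        # anuria treated as essentially zero output for 12 consecutive hours
        "anuria_12h_at": first_run_below(uo_ml, weight_kg, 1e-9, 12),
    }
```

In the study's outcome definition, initiation of RRT during the ICU stay is an alternative route to stage 3 and would simply be combined with these urine-output flags.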
The MAP was recorded every 15 minutes. We collected fluid balance data, but the time to obtain the fluid balance information was predefined at three points per day (at 5:00, 13:00 and 21:00 h) in the ICU. Therefore, we obtained fluid balance data during the first measurable 8-h period after oliguria started. Although we also obtained serum creatinine at ICU admission, we did not collect baseline serum creatinine data because our prior research shows that more than half of the patients in the ICU did not have baseline creatinine recorded [19]. Outcome measures The primary outcome measure was progression to stage-3 oliguric AKI (progression to oligoanuria and use of renal replacement therapy (RRT)) according to the KDIGO criteria: RRT initiation during the stay in the ICU, urine volume <0.3 mL/kg/h for 24 h consecutively within 7 days after ICU admission, or anuria for 12 h consecutively within seven days after ICU admission (Table 1) [15]. We did not use the criteria for serum creatinine in the KDIGO guidelines, because we specifically examined oliguria as the marker for early diagnosis of AKI. We additionally performed sensitivity analyses by focusing on initiation of RRT as the outcome. Measure of main exposure factors To assess the association between blood pressure and the progression to stage-3 oliguric AKI, we considered the cumulative hypotension duration as the most clinically relevant variable of blood pressure parameters based on recent studies [8,9]. Here, we set three hypotension thresholds: MAP of 65, 70, and 75 mm Hg. The cumulative time below a particular threshold MAP in a 6-h period after oliguric AKI diagnosis was considered as the main exposure factor for stage-3 progression. Stage-3 progression according to the urine output criteria is possible at 6 h after the oliguric AKI diagnosis. Therefore, we stopped data collection on exposure at 6 h after oliguric AKI diagnosis. We also calculated the "time-averaged MAP," "lowest MAP" and "area under threshold MAP" in the 6-h period. Statistical analysis Data were analyzed as medians with interquartile range (IQR) for continuous variables and as proportions for categorical variables, to which the Mann-Whitney U test and Fisher's exact test or chi-squared test were applied, respectively. We first visually described the relationship between the cumulative time spent below each threshold MAP (65, 70, and 75 mm Hg) and progression to stage-3 AKI using restricted cubic splines in univariable logistic regression models. To evaluate the predictability of each blood pressure parameter for stage-3 progression, we drew the area under the receiver operating curve (AUROC). As the primary analysis, multivariable logistic regression models were used to assess the association between the time categories below each threshold MAP and stage-3 progression. Here, the time category was divided into none (0 hours), minimal through 3 h (0-3 h) and 3 through 6 h (3-6 h), and the odds ratios (ORs) and the 95% confidence intervals (CIs) were calculated. The following variables were incorporated into the primary multivariable models: serum creatinine levels at ICU admission, the prevalence of sepsis at ICU admission, and APACHE II scores. Furthermore, we performed subgroup analyses stratified by sepsis and non-sepsis to elucidate the differences in etiology. All statistical analyses were performed using R (The R Foundation for Statistical Computing, ver. 3.30) and EZR (Saitama Medical Center, Jichi Medical University, ver. 
1.32), which is a graphical user interface for R [20]. All tests were two-tailed; p values <0.05 were regarded as statistically significant.
Results
The selection process for the study patients is presented in Fig. 1. We extracted 807 patients who stayed in the ICU for ≥24 h after oliguric AKI diagnosis. After excluding 269 patients who met the aforementioned exclusion criteria, we enrolled 538 patients with stage-1 oliguric AKI for our analyses. Progression to stage 3 was observed in 27 (19.6%) of 138 patients with sepsis and 27 (6.8%) of 400 patients without sepsis at ICU admission. Blood pressure parameters in patients with stage-3 progression and those with no progression are presented in Table 3. The longer the time below any threshold MAP continued, the more frequently stage-3 progression occurred (Fig. 2 and Table 4). Primary multivariable logistic regression models revealed that a hypotension duration of 3-6 h was significantly associated with stage-3 progression when threshold MAP was 65 mm Hg (adjusted OR 3.73, 95% CI 1.53-9.09, p = 0.004); however, such an association was attenuated when the threshold was 70 mm Hg (adjusted OR 2.35, 95% CI 0.96-5.78, p = 0.063) and 75 mm Hg (adjusted OR 1.92, 95% CI 0.72-5.15, p = 0.200) (Table 4). When the patients were stratified into patients with or without sepsis, the hypotension time of 3-6 h below MAP 65 and 70 mm Hg was significantly associated with stage-3 progression in patients without sepsis (65 mm Hg: adjusted OR 4.53, 95% CI 1.35-15.30, p = 0.015; 70 mm Hg: adjusted OR 4.42, 95% CI 1.03-19.00, p = 0.046), but was weak and not significant in patients with sepsis (Table 4). The results were similar when we performed sensitivity analyses by focusing on RRT initiation (Table 5).
Table 3. Blood pressure parameters within 6 h after oliguric AKI diagnosis.
Fig. 2. Graphical representation of the relationship between stage-3 progression and cumulative hypotension time below each threshold mean arterial pressure (65, 70, and 75 mm Hg) within 6 h after oliguric acute kidney injury (AKI) diagnosis, using restricted cubic splines in univariable logistic regression models.
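For reference, the exposure definition (cumulative time below each MAP threshold during the 6 h after oliguric AKI diagnosis, from 15-min MAP recordings) and the adjusted odds ratios of the primary model could be computed along the following lines. This is a hypothetical Python reconstruction, not the authors' R/EZR code; the data-frame and column names (stage3, time_below_65, creatinine_adm, sepsis, apache2, hours_below_65) are assumptions.

```python
# Hypothetical sketch of the exposure definition and the primary adjusted model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.metrics import roc_auc_score

THRESHOLDS = (65, 70, 75)   # mm Hg, as in the Methods
SAMPLE_H = 0.25             # MAP was recorded every 15 min


def exposure_category(map_6h: pd.Series, threshold: float) -> str:
    """Cumulative hours below `threshold` in the 6-h window, mapped to the
    categories used in the primary model: none, 0-3 h, or 3-6 h."""
    hours = float((map_6h < threshold).sum()) * SAMPLE_H
    if hours == 0:
        return "none"
    return "0-3h" if hours <= 3 else "3-6h"


# cohort: one row per patient with outcome, exposure categories and covariates.
cohort = pd.read_csv("oliguric_aki_cohort.csv")       # hypothetical file
for thr in THRESHOLDS:
    fit = smf.logit(
        f"stage3 ~ C(time_below_{thr}, Treatment('none')) + creatinine_adm "
        "+ sepsis + apache2",
        data=cohort,
    ).fit(disp=False)
    print(f"Threshold {thr} mm Hg:")
    print(np.exp(fit.params))        # adjusted odds ratios
    print(np.exp(fit.conf_int()))    # 95% confidence intervals

# Discrimination of one blood-pressure parameter (cf. the AUROC comparison).
print(roc_auc_score(cohort["stage3"], cohort["hours_below_65"]))
```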
Discussion
In this study, we demonstrated that early-phase cumulative hypotension time spent below a particular threshold MAP was associated with progression to stage-3 oliguric AKI (progression to oligoanuria and use of RRT) among critically ill patients with early oliguric AKI, especially in patients without sepsis. This is the first report examining the level and duration of hypotension, and the septic state in patients with early oliguric AKI, and may provide information to guide the management of early-stage AKI in the ICU. Which blood pressure parameter is the best: time-averaged MAP, the area under threshold MAP, or time below threshold MAP? From the calculated AUROC, the prediction of stage-3 progression was similar. Previous researchers have reported the optimal threshold of blood pressure to prevent AKI by examining time-averaged MAP [4,5]. However, in our study, although the difference in time-averaged MAP in patients with and without stage-3 progression was statistically significant, we should consider whether the difference was clinically meaningful (71 mm Hg vs. 75 mm Hg) (Table 3). In addition, the method using time-averaged MAP does not consider variation in blood pressure. Is the method appropriate to investigate the optimal threshold of blood pressure? On the other hand, in patients who underwent non-cardiac surgery, the time spent below intraoperative MAP of 55-60 mm Hg has been strongly associated with increased risk of postoperative AKI [8,9]. The primary outcome in these studies was not progression to severe-stage AKI, and not all patients were admitted to the ICU. Although the outcome and patient profile of these studies are different from those of our study, they indicate the importance of cumulative hypotension time in AKI research. Accordingly, in our study, we examined both area under threshold MAP and time below threshold MAP, considering the importance of hypotension time, and time below threshold MAP seemed comparable with and easier to apply in clinical practice than area under threshold MAP (Table 3). In this study, hypotension below a particular threshold MAP was associated with stage-3 progression. Surviving Sepsis Campaign Guidelines recommend maintaining MAP ≥65 mm Hg in patients with sepsis [6,7], while some earlier reports indicate that maintenance of MAP ≥70 mm Hg would prevent AKI and progression to severe-stage AKI. Badin and colleagues reported that time-averaged MAP of 72-82 mm Hg might be necessary for septic shock patients with AKI defined by serum creatinine, to prevent progression of AKI [4]. On the other hand, a recent large randomized controlled trial comparing mortality, AKI incidence, and RRT initiation between the target MAP of 65-70 mm Hg (low-target group) and 80-85 mm Hg (high-target group) in patients with septic shock (the SEPSISPAM study) did not support the maintenance of blood pressure much higher than 65 mm Hg [21]. However, it should be noted that MAP in most patients in the low-target group in this trial was actually maintained at higher than 70 mm Hg [21]. In addition, the target MAP of 80-85 mm Hg in the high-target group might have been much higher than necessary. Therefore, some caution would be needed to interpret the trial results. Another important result of our study was the association between early-phase cumulative hypotension time and stage-3 progression among oliguric patients with AKI without sepsis, and the association was weak among patients with sepsis at any threshold MAP. In this study, more than 60% of patients without sepsis were postoperative. The mechanism of progression to severe AKI might be different between patients with sepsis and postoperative patients with AKI. Postoperative AKI might be more sensitive to the continuation of hypotension than septic AKI. Factors other than hypotension might affect the stage progression in septic AKI. Is there a "golden time" to treat early-phase AKI, as in acute myocardial infarction and acute ischemic stroke? We identified an association between cumulative hypotension time and severe oliguric AKI, even in a short time-frame such as 6 h after oliguric AKI diagnosis. A recent study revealed that urine output responsiveness after a furosemide stress test is superior to any recent biomarker in the prediction of severe-stage progression [22]. Another study indicated that oliguric AKI is associated with poor prognosis, even when the serum creatinine level is not increased [23]. These findings, including ours, suggest that urine output might be efficient as a continuous monitor.
Early diagnosis of oliguric AKI through continuous urine output monitoring would enable us to initiate earlier treatment of AKI. Effective treatments have still not been established for AKI, but future studies might provide effective procedures including optimal blood pressure levels in patients with early oliguric AKI. If there is a "golden-time" to treat AKI, early diagnosis by urine output and early treatment with blood pressure management would be clinically important. This study has several limitations. First, the control of confounding factors may be insufficient because of the observational study design. It was difficult to obtain data on diabetes mellitus, chronic hypertension, the presence of hypotension before ICU admission, and the exposure to radiocontrast or nephrotoxic agents. Positive fluid balance has recently been a well-known risk factor for patients' prognoses [24][25][26]. In our study, only fluid balance during the first measurable 8-h period after start of oliguria was available. In addition, more than 50% of the included patients were postoperative, but it was difficult to assess retrospectively whether oliguria was due to hypovolemia. Therefore, the results of this study do not directly imply that increasing blood pressure itself has an impact on AKI incidence and progression. Second, we used only urine output to define AKI and the AKI stages. Therefore, our definition of AKI did not strictly follow the definition in the KDIGO criteria. However, it is well-known that preadmission baseline creatinine data are often unavailable in clinical practice [27]. As shown in our previous paper [19], baseline serum creatinine levels were not known among more than 50% of the patients in the ICU despite an effort to obtain the data. In many AKI studies, the serum creatinine back-estimation method has frequently been used for complementing missing data on baseline serum creatinine [28][29][30]. However, Bernardi and colleagues have pointed out that this frequently used method, assuming a "true" glomerular filtration rate of 75 mL/min/1.73 m 2 , is not accurate and that it might have caused misclassification [31]. Therefore, although our current study fundamentally targeted only urine output as a continuous monitor, it could be acceptable that we did not use baseline serum creatinine data. Third, baseline blood pressure measurements could not be obtained in this study. Even in the SEPSISPAM study, which showed no difference in outcomes between the high-target MAP group and the low-target MAP group, the proportion of AKI and RRT among patients with chronic hypertension was lower in the high-target group than in the low-target group [21]. Consequently, baseline blood pressure might be an important co-morbid factor in AKI-related studies. Last, this study was conducted in a single center, and the number of patients was small. Although sepsis has been reported as the leading cause of AKI in the ICU [1], most patients included in this study were patients without sepsis rather than patients with sepsis, who accounted for only 25.7% of the study sample. Therefore, the generalizability of the study might be limited. Conclusions In conclusion, early-phase cumulative hypotension time below a particular threshold MAP was significantly associated with progression to the severe stage among critically ill patients with early oliguric AKI, especially in patients without sepsis. This association was weak in patients with sepsis. 
More attention should be paid to the length of time spent in hypotension in ICU care.
Key messages
Early-phase cumulative hypotension time below threshold MAP was associated with stage-3 progression of oliguric AKI, especially in patients without sepsis.
The association was weak and was not significant in patients with sepsis.
In future AKI research, it is necessary to further investigate the significance of hypotension duration and the optimal blood pressure threshold.
Abbreviations
AKI: acute kidney injury; APACHE II: Acute Physiology and Chronic Health Evaluation II; AUROC: area under the receiver operating curve; BP: blood pressure; CI: confidence interval; ICU: intensive care unit; KDIGO: Kidney Disease: Improving Global Outcomes; MAP: mean arterial pressure; OR: odds ratio; RRT: renal replacement therapy
BMP2, but not BMP4, is crucial for chondrocyte proliferation and maturation during endochondral bone development The BMP signaling pathway has a crucial role in chondrocyte proliferation and maturation during endochondral bone development. To investigate the specific function of the Bmp2 and Bmp4 genes in growth plate chondrocytes during cartilage development, we generated chondrocyte-specific Bmp2 and Bmp4 conditional knockout (cKO) mice and Bmp2,Bmp4 double knockout (dKO) mice. We found that deletion of Bmp2 and Bmp4 genes or the Bmp2 gene alone results in a severe chondrodysplasia phenotype, whereas deletion of the Bmp4 gene alone produces a minor cartilage phenotype. Both dKO and Bmp2 cKO mice exhibit severe disorganization of chondrocytes within the growth plate region and display profound defects in chondrocyte proliferation, differentiation and apoptosis. To understand the mechanism by which BMP2 regulates these processes, we explored the specific relationship between BMP2 and Runx2, a key regulator of chondrocyte differentiation. We found that BMP2 induces Runx2 expression at both the transcriptional and post-transcriptional levels. BMP2 enhances Runx2 protein levels through inhibition of CDK4 and subsequent prevention of Runx2 ubiquitylation and proteasomal degradation. Our studies provide novel insights into the genetic control and molecular mechanism of BMP signaling during cartilage development. Introduction During skeletal development, the majority of the bones in the body are established by the endochondral bone formation process, which is initiated by mesenchymal cell condensation and subsequent mesenchymal cell differentiation into chondrocytes and surrounding perichondrial cells. The committed chondrocytes proliferate rapidly forming the cartilage growth plate where cells are arranged in columns of proliferating, differentiating and terminally hypertrophic chondrocytes. Chondrocytes near the center of the cartilage elements exit the cell cycle initiating the process of hypertrophic differentiation to generate a calcified cartilage matrix. Eventually, the local vasculature, perichondrial osteoblasts and various other types of cells invade the calcified cartilage, replacing the terminally mature chondrocytes with marrow components and trabecular bone matrix. Primary ossification occurs with osteoblast-mediated bone formation, which initially occurs on the calcified cartilage template. Chondrocyte maturation and the endochondral bone development process is tightly regulated by a series of growth factors and transcription factors, including bone morphogenetic proteins (BMPs), fibroblast growth factors (FGFs), indian hedgehog (Ihh), parathyroid hormone-related protein (PTHrP), Wnt signaling proteins and Runt-related transcription factor 2 (Runx2) (Yoon and Lyons, 2004;Ornitz, 2005;Kronenberg, 2003;Komori, 2003;Kolpakova and Olsen, 2005). BMPs are multi-functional growth factors that belong to the transforming growth factor b (TGF-b) super family. In vivo evidence suggests that BMP signaling is primarily mediated through the canonical BMP-Smad pathway in chondrocytes (Yoon et al., 2005). BMPs bind the type II receptor and phosphorylate type I serine or threonine receptors, which subsequently phosphorylate Smad1, Smad5 and Smad8 (R-Smads). The activated R-Smads form a complex with Smad4 before entering the nucleus to regulate target gene transcription. Several lines of evidence suggest that the BMP-Smad pathway has a crucial role in endochondral bone development. 
Removal of Bmp2 and Bmp4 specifically from mesenchymal cells leads to defects in skeletal development (Bandyopadhyay, 2006). Deletion of the Smad1 and Smad5 genes or the Bmpr1a and Bmpr1b genes in cartilage results in chondrodysplasia (Yoon et al., 2005;Retting et al., 2009). In addition to the role of BMPs in early mesenchymal cell differentiation (Haas and Tuan, 1999;Hatakeyama et al., 2004), they also have crucial roles during later stages of chondrocyte proliferation and differentiation (Shukunami et al., 2000;Leboy et al., 2001;Valcourt et al., 2002). However, as a result of overlapping and redundant functions among different BMP genes, the regulatory role of these genes during chondrocyte proliferation and maturation in vivo remains undefined. Bmp2 and Bmp4 mRNAs are highly expressed in prehypertrophic and hypertrophic chondrocytes of the growth plate (Feng et al., 2003;Nilsson et al., 2007) (supplementary material Fig. S1). Bmpr1a is highly expressed in pre-hypertrophic chondrocytes, and phosphorylated Smad1, Smad5 and Smad8 proteins are detected in the lower region of proliferating columnar zone and pre-hypertrophic chondrocytes (Sakou et al., 1999;Yoon et al., 2006). The specific expression patterns of these genes suggest an essential role for Bmp2 and/or Bmp4 in chondrocyte proliferation and maturation during endochondral bone development. In vitro studies have shown that BMP2 and BMP4 stimulate the progression of chondrocyte hypertrophy (Hatakeyama et al., 2004;Leboy et al., 1997;De Luca et al., 2001;Minina et al., 2001;Horiki et al., 2004;Clark et al., 2009). Similarly, expression of constitutively active Bmpr1a in chondrocytes induces the acceleration of chondrocyte differentiation into hypertrophic chondrocytes (Kobayashi et al., 2005). These findings suggest that Bmp2 and Bmp4 have a similar and redundant role in chondrocyte maturation. To determine which one of these BMP genes is required for chondrocyte development in vivo, we have generated chondrocyte-specific Bmp2 and Bmp4 cKO mice and Bmp2,Bmp4 dKO mice. Chondrocyte-specific deletion of these BMP genes is achieved by breeding Col2a1CreER T2 transgenic mice with the Bmp2 or Bmp4 floxed mice (Bmp2 fx/fx and Bmp4 fx/fx ). Chondrocyte-specific gene deletion is achieved by intraperitoneal injection of a single dose of tamoxifen (TM) to the pregnant female carrying embryos at embryonic day 12.5 (E12.5). We then assessed changes in chondrocyte maturation in these mutant embryos at E14.5 and E18.5. Our studies demonstrate that deletion of only Bmp2 or both Bmp2 and Bmp4 genes led to severe defects in chondrocyte proliferation and maturation during endochondral bone development. By contrast, chondrocyte-specific deletion of only the Bmp4 gene caused minor changes in chondrocyte maturation. Our findings indicate that Bmp2 has a crucial and non-redundant role in chondrocyte proliferation and maturation during endochondral bone development. Results Deletion of Bmp2 and Bmp4 or Bmp2 alone impairs skeletal development To investigate the role of endogenous Bmp2 and Bmp4 genes in growth plate chondrocyte maturation and skeletal development, pregnant mice with embryos at E12.5 were injected with TM. E18.5 embryos were collected and whole skeletal Alizarin Red and Alcian Blue staining was performed. 
Whole skeletons and individual skeletal elements of Bmp2 and Bmp4 (Bmp2/4) dKO and Bmp2 cKO embryos were very small compared with their Cre-negative littermate controls, suggesting impaired skeletal development in Bmp2/4 dKO and Bmp2 cKO embryos (Fig. 1A). Calvaria of these mutant embryos were smaller than those of Crenegative littermates, and cartilaginous occipital bones were nearly absent, demonstrating that intramembranous bone formation was also impaired. Compared with Cre-negative littermates, the deformed thoracic cavities of Bmp2/4 dKO and Bmp2 cKO embryos were significantly smaller with minimal bone formation. Spines and hind limbs of Bmp2/4 dKO and Bmp2 cKO embryos were also markedly shorter than Cre-negative littermates (Fig. 1B). However, only minor differences were observed in all skeletal elements analyzed from the Bmp4 cKO embryos compared with Cre-negative littermates, suggesting that the Bmp4 gene has a minor role in normal embryonic skeletal growth and development or is complemented by the expression of other BMP genes (Fig. 1A,B). Formation of primary ossification center is delayed in Bmp2/4 dKO and Bmp2 cKO embryos To further analyze changes in skeletal development in Bmp2/4 dKO and Bmp2 cKO embryos, histological staining was performed on tibias sections of E14.5 Bmp2/4 dKO, Bmp2 and Bmp4 cKO embryos and the Cre-negative littermates. In Crenegative embryos, chondrocytes in the middle of tibia began the differentiation process forming a hypertrophic zone, which stained weakly with Alcian Blue or Safranin O compared with the adjacent immature chondrocytes. In Bmp2/4 dKO embryos, the whole tibia was smaller than that of Cre-negative embryos and chondrocyte hypertrophy was absent, as was evidence of formation of the primary ossification center. Bmp2 cKO embryos showed a very similar delay in the formation of the hypertrophic zone of tibia compared with Bmp2/4 dKO embryos. By contrast, there were minimal changes in the tibiae of Bmp4 cKO embryos when compared with the Cre-negative littermates ( Fig. 2A). Bmp4 cKO embryos had evidence of chondrocyte hypertrophy and formation of the primary ossification center. These findings demonstrated that chondrocyte hypertrophy is severely delayed by deletion of the Bmp2 gene, but not the Bmp4 gene, in Col2a1positive chondrocytes during embryonic skeletal development. Chondrocyte maturation is impaired in Bmp2/4 dKO and Bmp2 cKO embryos Further histological analysis was performed on E18.5 embryos. The results demonstrated that the lengths of proliferative and hypertrophic zones of Bmp2/4 dKO and Bmp2 cKO embryos were significantly reduced with disorganized columnar chondrocyte structure (Fig. 2B). Normal hypertrophic chondrocytes were replaced with smaller number of enlarged hypertrophic chondrocytes, with expansion of both the cytoplasm and nucleus in Bmp2/4 dKO and Bmp2 cKO embryos ( Fig. 2B-D). The reduced size of the growth plate was associated with less endochondral bone formation, although ectopic matrix deposition was observed at the perichondrial region surrounding the abnormal cartilage in Bmp2/4 dKO and Bmp2 cKO embryos (Fig. 2C,D). Because Col2a1CreER T2 mice do not target perichondrial cells , this ectopic matrix formation suggests a non-cell-autonomous effect in Bmp2/4 dKO and Bmp2 cKO embryos. To rule out the toxic effect of TM on embryonic skeletal development, we injected TM in pregnant WT mice with embryos at E12.5. E18.5 embryos were collected and histology staining was performed. 
No significant difference in skeletal development was found by injection of TM (supplementary material Fig. S2). In L4 vertebrae, hypertrophic chondrocyte area was reduced over 50% in Bmp2/4 dKO and Bmp2 cKO embryos compared with those in Cre-negative embryos. The reduced chondrocyte hypertrophy and decreased matrix deposition observed in the center of the vertebral body (Fig. 2E, upper panel) indicate that the chondrocyte maturation process is also delayed in vertebral bones in Bmp2/4 dKO and Bmp2 cKO embryos. By contrast, only minor changes in growth plate chondrocyte maturation were found in E18.5 Bmp4 cKO embryos (Fig. 2B,C,E), suggesting that the expression of the Bmp4 gene is not absolutely required for chondrocyte maturation and cartilage development. To further determine changes in cellular function in growth plate chondrocytes, we performed proliferating cell nuclear antigen (PCNA) staining and terminal deoxynucleotidyl transferase-mediated dUTP-biotin nick end labeling (TUNEL) staining using tibia sections of E18.5 embryos. The results of PCNA staining demonstrated that cell proliferation was dramatically reduced in Bmp2/4 dKO and Bmp2 cKO embryos. By contrast, no significant reduction in PCNA-positive proliferating chondrocytes was found in Bmp4 cKO embryos (Fig. 3A and B). The TUNEL staining images of E18.5 embryos demonstrated that chondrocyte apoptosis was significantly increased in Bmp2/4 dKO and Bmp2 cKO embryos (Fig. 3C).
Defects in chondrocyte differentiation in Bmp2/4 dKO and Bmp2 cKO embryos
To examine chondrocyte differentiation, we performed in situ hybridization assays using Col2a1, Col10a1 and Mmp13 probes. Col2a1 is highly expressed in growth plate chondrocytes in the resting and proliferating chondrocytes in E18.5 Cre-negative embryos. Opposing Col2a1 expression, Col10a1 is highly expressed in pre-hypertrophic and hypertrophic chondrocytes. Mmp13 is expressed in terminal hypertrophic chondrocytes that are proceeding into the final apoptotic stage where cartilage matrix is degraded and replaced by bone matrix. In this study, we found that the expression of all of these chondrocyte marker genes was significantly reduced in E18.5 Bmp2/4 dKO and Bmp2 cKO embryos by in situ hybridization (Fig. 4A). To further determine changes in chondrocyte marker gene expression, we also performed real-time RT-PCR assays and found that expression of Sox9, Acan (aggrecan) and Col2a1 was significantly reduced in chondrocytes in which the Bmp2 or Bmp2/4 genes were deleted (Fig. 4B-D). Compared with the changes in gene expression in Bmp2/4 dKO and Bmp2 cKO embryos, there are minor changes in the expression of these chondrocyte marker genes in Bmp4 cKO embryos (Fig. 4B-D). In this assay, primary chondrocytes were isolated from E18.5 mutant and Cre-negative embryos. It has been reported that BMP-2 regulates itself and several other BMP genes, including Bmp4, Bmp5, Bmp6 and Bmp8a (Harris et al., 1994;Ghosh-Choudhury et al., 1995;Chen et al., 1997;Edgar et al., 2007). The expression of several BMP family members was examined in Bmp2- and Bmp4-deleted chondrocytes. The expression of Bmp5, Bmp7, Bmp8b and Bmp9 was significantly downregulated in the chondrocytes in which the Bmp2 gene was deleted (Fig. 4E-I). By contrast, no significant change in the expression of these genes was found in Bmp4-deficient chondrocytes (Fig. 4J-N). These results suggest that these BMP genes are regulated by endogenous BMP2.
In addition, we found that Bmp4 expression was upregulated in Bmp2-deficient chondrocytes and Bmp2 expression was upregulated in Bmp4-deficient chondrocytes (supplementary material Fig. S3), suggesting that expression of Bmp4 and Bmp2 genes was regulated by endogenous BMP2 and BMP4. To determine the interaction of Wnt/b-catenin and BMP signaling pathways, we isolated primary sternal chondrocytes from Bmp2/4 fx/fx mice. The cells were infected with Ad-Cre or Ad-GFP (control) and treated with BIO (1 mM), a GSK-3b inhibitor, and Wnt3a (100 ng/ml). We found that BIO-and Wnt3a-induced Alp expression was significantly inhibited in Bmp2/4-deficient chondrocytes (Fig. 4O), suggesting that canonical Wnt/b-catenin signaling may stimulate chondrocyte differentiation partially through a Bmp2/4-dependent mechanism. Significant amounts of ectopic matrix deposition were found in perichondrial areas of Bmp2/4 dKO and Bmp2 cKO embryos. To determine whether bone-specific markers and key transcription factors are upregulated in these areas, we examined Runx2 and Osterix expression by immunocytochemistry. We found that the numbers of Runx2-and Osterix-positive cells, and staining intensity were significantly increased in perichondrial areas of Bmp2/4 dKO and Bmp2 cKO embryos (Fig. 5A,B). By contrast, Runx2 expression in the proliferating and pre-hypertrophic areas was significantly reduced in Bmp2/4 dKO embryos (Fig. 5A). Taken together, the findings suggest that chondrocyte functions are severely impaired when the Bmp2 gene, but not the Bmp4 gene, is deleted in E18.5 Bmp2/4 and Bmp2 mutant embryos. BMP-2 upregulates Runx2 protein levels by downregulation of CDK4 expression It has been well documented that BMP2 induces Runx2 mRNA expression (Chen et al., 1998;Hassan et al., 2006). In the present studies, we examined the effects of BMP2 on Runx2 mRNA and protein expression in chondrocytes. We found that BMP2 induced Runx2 mRNA expression up to 3.5-fold, but enhanced Runx2 protein levels up to 10-fold (Fig. 6A,D,E). These observations suggest that, in addition to its transcriptional regulation, BMP2 also regulates Runx2 expression at the posttranscriptional level. Our previous report demonstrated that Runx2 protein levels are regulated by the ubiquitin-proteasome pathway through a cyclin-D1-CDK4-induced phosphorylation of Runx2 (Shen et al., 2006). To determine whether BMP2 regulates CDK4 expression, we performed western blot analysis and found that BMP2 significantly inhibited CDK4 expression in chondrogenic RCS cells (Fig. 6A). Similarly to BMP2, BMP4 also inhibited CDK4 expression in a time-dependent manner (supplementary material Fig. S4). The BMP2-mediated enhancement of Runx2 protein levels could be partially inhibited by expression of CDK4 in these cells (Fig. 6A). The regulatory role of BMP2 in Runx2 protein degradation was further confirmed by a Runx2 ubiquitylation assay. BMP2 inhibited Runx2 ubiquitylation whereas overexpression of CDK4 partially reversed the inhibitory effect of BMP2 on Runx2 ubiquitylation in RCS cells (Fig. 6B). To further determine the role of CDK4 in regulation of Runx2 protein expression, we transfected Cdk4 siRNA into RCS cells and found that similar to the addition of BMP2, transfection of Cdk4 siRNA also enhanced Runx2 protein levels. Addition of noggin blocked BMP2-induced Runx2 expression. By contrast, noggin had no effect on Cdk4 siRNA-induced upregulation of Runx2 protein (Fig. 6C), suggesting that CDK4 works downstream of BMP2 in regulation of Runx2 protein levels. 
We further analyzed the dose-response effect of BMP2 on Runx2 protein levels and demonstrated that BMP2 upregulated Runx2 protein levels in a dose-dependent manner. Overexpression of CDK4 significantly inhibited BMP2-induced upregulation of Runx2 protein (Fig. 6D). Interestingly, we also found that over-expression of CDK4 also inhibited BMP2-induced Runx2 mRNA expression in RCS cells (Fig. 6E). To determine if BMP2 affects cyclin-D1-CDK4 interaction, we performed immunoprecipitation assays in the absence or presence of BMP2. We found that BMP2 significantly inhibited the interaction between cyclin D1 and CDK4 in chondrocytes (Fig. 6F). Addition of noggin abolished the inhibitory effect of BMP2 on the cyclin-D1-CDK4 interaction (Fig. 6F). These results indicate that BMP2 might prevent Runx2 degradation through downregulation of CDK4 expression and inhibition of cyclin-D1-CDK4 interaction in chondrocytes. It has been reported that Sox9 interacts with Runx2 and inhibits Runx2 function (Akiyama et al., 2004). In this study, we examined the effect of Sox9 siRNA on Runx2 levels in chondrogenic RCS cells. We found that transfection of Sox9 siRNA upregulated basal and BMP2-induced Runx2 protein levels in RCS cells (supplementary material Fig. S5). These results suggest that BMP2-regulated Runx2 expression is not Sox9 dependent. Taken together, these results suggest that part of the effect of BMP2 on Runx2 upregulation is mediated through downregulation of CDK4 expression and subsequent inhibition of Runx2 ubiquitylation in chondrocytes.
Discussion
In vitro studies suggest that BMP2 and BMP4 have similar functions. For example, both BMP2 and BMP4 induce mouse embryonic stem cell and human mesenchymal stem cell differentiation into chondrocytes (Kramer et al., 2000;Steinert et al., 2009). BMP2 and BMP4 also stimulate chondrocyte proliferation and hypertrophy (De Luca et al., 2001;Minina et al., 2001;Hatakeyama et al., 2004;Leboy et al., 1997). The expression of BMP2 and BMP4 proteins is detected in chondrocytes during endochondral ossification in fracture callus with the strongest expression detected in hypertrophic chondrocytes (Yu et al., 2010). During ectopic bone formation induced by implantation of Saos-2 cells into nude mice, both BMP2 and BMP4 are upregulated in mature chondrocytes (McCullough et al., 2007). Overexpression of Bmp2 or Bmp4 induces ectopic bone formation through a mechanism that is similar to endochondral ossification (Alden et al., 1999;Kubota et al., 2002;Jane et al., 2002). Because in vivo environments are different from in vitro studies, the in vitro findings need to be confirmed through an in vivo approach. Homozygous Bmp2 mutant embryos (conventional deletion of the Bmp2 gene) die between E7.5 and E10.5 and have defects in cardiac development (Zhang and Bradley, 1996); whereas homozygous Bmp4 mutant embryos (conventional deletion of the Bmp4 gene) die between E6.5 and E9.5, and show little or no mesodermal differentiation (Winnier et al., 1995). Because skeletal development begins around E10.5-E11.5, Bmp2 and Bmp4 conventional KO mouse models cannot be used to study skeletal biology. Deletion of the Bmp2 or Bmp4 gene specifically in the limb bud mesenchyme leads to severe chondrodysplasia, suggesting crucial roles of both Bmp2 and Bmp4 in early mesenchymal cell differentiation (Bandyopadhyay et al., 2006).
To determine the specific functions of Bmp2 and Bmp4 in chondrocyte proliferation and maturation during endochondral bone development in vivo, we have generated chondrocyte-specific Bmp2 and Bmp4 cKO mice and Bmp2/4 dKO mice using Col2a1CreER T2 transgenic mice in which the expression of the CreER transgene is induced by tamoxifen and is restricted to cartilage. In our studies, deletion of the Bmp2/4 or Bmp2 gene in Col2a1-expressing chondrocytes resulted in severe defects in endochondral bone development, which differs from the results obtained by deletion of the Bmp2 and Bmp4 genes in mesenchymal progenitor cells (mediated by Prx1Cre transgenic mice). These findings suggest that during early mesenchymal cell differentiation, functions of Bmp2 and Bmp4 might be at least partially compensated by each other. However, during late stage chondrocyte maturation, Bmp2 function cannot be compensated by Bmp4 or other BMP genes in chondrocytes. In postnatal Bmp2 cKO mice (mediated by Prx1Cre), the fracture healing process is delayed. However, the fracture healing process was not affected in Bmp4 cKO mice (mediated by Prx1Cre) (Tsuji et al., 2006). In Bmp2/4 dKO mice and Bmp2 cKO embryos, both chondrocyte proliferation and maturation are impaired. Chondrocyte columns in the proliferating and hypertrophic zones are disorganized with a dramatic decrease in Col2a1 and Col10a1 expression. In Bmp2/Bmp4 dKO and Bmp2 cKO embryos, ectopic bone formation was observed in perichondrial areas with enhanced Runx2 and Osterix expression. Because Col2a1CreER T2 mice do not target perichondrial cells, it seems that this ectopic bone formation reflects the secondary effect of deletion of the Bmp2 gene. In contrast to the perichondrial area, Runx2 expression in the proliferating and pre-hypertrophic areas was significantly reduced in Bmp2/4 dKO embryos. To investigate the regulatory mechanism of BMP2 on Runx2 expression, we examined the effect of BMP2 on Runx2 mRNA and protein levels in chondrogenic RCS cells. In addition to its stimulatory effect on Runx2 mRNA expression, BMP2 had a much greater effect on Runx2 protein levels than its effect on Runx2 mRNA expression. Our in vitro studies demonstrate that BMP2 prevents Runx2 protein ubiquitylation through downregulation of CDK4 expression and inhibition of cyclin-D1-CDK4 interaction. Sox9 is an important downstream mediator of the BMP2 and hedgehog signaling pathways in osteoblasts. A Smad responsive element responsible for BMP2 activation was identified in the Sox9 promoter (Pan et al., 2008). Sox9 expression is upregulated by BMP2 in a mesenchymal progenitor cell line (Zehentner et al., 1999). It has also been reported that Sox9 interacts with Runx2 and inhibits Runx2 function (Akiyama et al., 2004).
Fig. 4. (A) In situ hybridization demonstrates that Col2a1, Col10a1 and Mmp13 expression is significantly reduced in E18.5 Bmp2/4 dKO and Bmp2 cKO embryos. (B-D) Total RNA was extracted from primary chondrocytes isolated from E18.5 Cre-negative control, Bmp2 cKO, Bmp4 cKO and Bmp2/4 dKO embryos. The expression of Sox9, Acan and Col2a1 genes was analyzed by real-time PCR. Results demonstrate that the expression of these chondrocyte marker genes is significantly reduced in Bmp2/4 dKO and Bmp2 cKO chondrocytes. By contrast, the expression of Sox9 and Acan was slightly but significantly reduced in Bmp4 cKO embryos.
(E-I) Total RNA was extracted from primary chondrocytes derived from E18.5 Bmp2 cKO and Cre-negative embryos. The expression of BMP genes was analyzed by real-time PCR. Results demonstrated that the expression of the Bmp5, Bmp7, Bmp8b and Bmp9 genes was significantly reduced in Bmp2-deficient chondrocytes. (J-N) Total RNA was extracted from primary chondrocytes derived from E18.5 Bmp4 cKO and Cre-negative embryos. The expression of BMP genes was analyzed by real-time PCR. Results demonstrated that the expression of the Bmp5, Bmp7, Bmp8b and Bmp9 genes is not significantly changed in Bmp4-deficient chondrocytes. (O) Primary sternal chondrocytes were isolated from 3-day-old Bmp2/4 fx/fx mice and were infected with Ad-Cre or Ad-GFP (control). 48 hours after infection, cells were treated with BIO (1 μM) or Wnt3a (100 ng/ml). Cell cultures were stopped 24 hours later, total RNA was extracted, and Alp expression was examined by real-time PCR. BIO- or Wnt3a-induced Alp upregulation is significantly inhibited in the Bmp2/4-deficient chondrocytes (Ad-Cre-infected cells). *P<0.05, unpaired Student's t-test, n=3. Values are means ± s.e.m.

We found that silencing of Sox9 upregulated Runx2 protein levels, suggesting that Sox9 has an inhibitory effect on Runx2, and that BMP2-mediated Runx2 upregulation is Sox9 independent. BMP2 induces the expression of molecular marker genes characteristic of hypertrophic chondrocytes, such as Col10a1 and Alp (Valcourt et al., 2002). When BMP signals are transduced through R-Smads, Smad1 can interact with the transcription factor Runx2. It has been reported that BMP2 promotes Col10a1 and Smad6 gene transcription through conserved Runx2 binding sites (Leboy et al., 2001; Zheng et al., 2003; Wang et al., 2007). Our studies suggest that BMP2 might regulate Col10a1 expression through Smad1-Runx2 interaction at the 5′ promoter region of the Col10a1 gene.

Fig. 6. BMP2 protects Runx2 protein from degradation through inhibition of CDK4 expression. (A) CDK4 expression construct was transiently transfected into chondrogenic RCS cells. 24 hours after transfection, the cells were treated with BMP2 (100 ng/ml) for 24 hours. Runx2 and CDK4 protein expression was detected by western blotting. BMP2 inhibits CDK4 expression and enhances Runx2 protein levels. Expression of CDK4 partially inhibits BMP2-induced Runx2 upregulation. (B) Runx2 ubiquitylation assay. CDK4 expression construct was transiently transfected into RCS cells. 24 hours after transfection, the cells were treated with BMP2 (100 ng/ml) for 24 hours. The proteasome inhibitor MG132 (10 μM) was added to the medium 4 hours before cell lysates were collected. Ubiquitylated proteins were pulled down using a UbiQapture-Q kit and polyubiquitylated Runx2 was detected using an anti-Runx2 antibody. BMP2 inhibits Runx2 ubiquitylation and expression of CDK4 partially reverses the inhibitory effect of BMP2 on Runx2 ubiquitylation. (C) Cdk4 siRNA was transfected into RCS cells. Cells were treated with BMP2 (100 ng/ml) with or without noggin (300 ng/ml) 24 hours after Cdk4 siRNA transfection. The expression of Runx2 and CDK4 protein was detected by western blotting. Silencing of CDK4 results in an upregulation of Runx2 protein levels. Noggin inhibits the effect of BMP2 on Runx2 upregulation but could not inhibit the effect of Cdk4 siRNA on Runx2 protein upregulation. (D) CDK4 expression construct was transiently transfected into RCS cells.
Cells were treated with BMP2 at different concentrations (0, 20, 100, 200 ng/ml) for 24 hours, beginning 24 hours after CDK4 transfection. The expression of Runx2 and CDK4 protein was detected by western blotting. BMP2 upregulates Runx2 protein levels in a dose-dependent manner. Expression of CDK4 partially inhibits BMP2-induced Runx2 protein upregulation. (E) RCS cells were treated with different concentrations of BMP2 with or without transfection of CDK4. Runx2 mRNA expression was determined by real-time PCR. CDK4 partially inhibits BMP2-induced Runx2 mRNA expression. (F) RCS cells were transfected with cyclin D1 and CDK4 expression plasmids and treated with BMP2 (100 ng/ml). Cell lysates were collected and subjected to IP using an anti-cyclin-D1 or anti-CDK4 antibody followed by western blotting using the anti-CDK4 or anti-cyclin-D1 antibody. Treatment with BMP2 inhibits the cyclin-D1-CDK4 interaction and addition of noggin blocks the inhibitory effect of BMP2 on the cyclin-D1-CDK4 interaction.

Our previous observations and reports from other laboratories demonstrated that both the Bmp2 and Bmp4 genes are expressed in chondrocytes at similar levels during embryonic development and early postnatal stages (Feng et al., 2003) (supplementary material Fig. S1). Thus, the phenotypic difference in skeletal development observed in Bmp2 and Bmp4 cKO embryos cannot be explained by the expression patterns of these two genes in chondrocytes. Previous reports suggest that BMP2 regulates the expression of other BMP family members in mesenchymal progenitor cells, osteoblasts and chondrocytes (Harris et al., 1994; Ghosh-Choudhury et al., 1995; Chen et al., 1997; Ghosh-Choudhury et al., 2001; Edgar et al., 2007), suggesting that BMP2 serves as an upstream regulator of other BMP genes in chondrocytes. In terms of the mechanism by which BMP2 and BMP4 regulate chondrocyte maturation, the main difference between these two growth factors is that BMP2 controls the expression of Bmp2 itself and of other BMP genes through autocrine and paracrine regulatory mechanisms. This notion is supported by several lines of evidence. (1) Our previous studies demonstrate that BMP2 upregulates Bmp2 gene transcription through the Bmp2 proximal promoter element (Ghosh-Choudhury et al., 2001). (2) In a fracture healing study, it has been shown that Bmp2 expression reaches its maximal level at day 1 after fracture. Gdf5 showed maximal expression at day 7. The expression of Bmp4, Bmp7 and Bmp8 was detected from day 14 to day 21, whereas Bmp5, Bmp6 and Gdf10 were expressed from day 3 to day 21 (Cho et al., 2002). (3) It has been reported that in bone marrow cell culture, addition of a BMP2-neutralizing antibody reduced the endogenous expression of BMP2, BMP3, BMP5 and BMP8a, whereas addition of BMP2 had the opposite effect (Edgar et al., 2007). (4) In the present studies, we demonstrated that expression of Bmp5, Bmp7, Bmp8b and Bmp9 was downregulated in Bmp2-deficient chondrocytes. By contrast, expression of these BMP genes was not significantly changed in Bmp4-deficient chondrocytes. The phenotype of Bmp2 cKO embryos is similar to that of Smad1/5 dKO embryos (Retting et al., 2009) and Bmpr1a/Bmpr1b dKO embryos (Yoon et al., 2005), including impaired skeletal development, disorganized growth plate formation, decreased cartilage matrix deposition, decreased chondrocyte proliferation and increased chondrocyte apoptosis in the growth plate.
Ectopic matrix deposition in the perichondrial area surrounding prehypertrophic and hypertrophic chondrocytes was also seen in Smad1/5 dKO embryos, which is consistent with what we observed in Bmp2 cKO mice. However, skeletal development was more severely impaired in Smad1/5 dKO embryos and Bmpr1a/Bmpr1b dKO embryos compared with Bmp2 cKO embryos. One possibility is that Bmp2 cKO was induced at stage E12.5 in our studies. By contrast, the Smad1/5 and Bmpr1a/Bmpr1b genes were deleted earlier than E12.5, which could have led to more severe defects in chondrogenesis. In summary, our findings indicate that Bmp2 is required for chondrocyte maturation and endochondral bone formation during embryonic development.

Materials and Methods

Generation of Bmp2/4 dKO mice and Bmp2 and Bmp4 cKO mice
Col2a1-CreER T2 mice were generated in our lab. Bmp2 fx/fx mice were generated in the lab of Stephen Harris at the University of Texas Health Science Center at San Antonio, TX (Singh et al., 2008), and Bmp4 fx/fx mice were a gift from Brigid Hogan (Duke University) (Kulessa and Hogan, 2005; Gluhak-Heinrich et al., 2010). To generate KO embryos, Bmp2 fx/fx;Bmp4 fx/fx, Bmp2 fx/fx and Bmp4 fx/fx mice were crossed with Col2a1-CreER T2;Bmp2 fx/fx;Bmp4 fx/fx, Col2a1-CreER T2;Bmp2 fx/fx and Col2a1-CreER T2;Bmp4 fx/fx mice, respectively. The pregnant mice were injected with tamoxifen (1 mg/10 g body weight, i.p.) at E12.5 and sacrificed at E14.5 and E18.5. The Cre-positive embryos were used as KO embryos and the Cre-negative littermates were used as controls.

Whole embryo Alizarin Red and Alcian Blue staining
Embryos at E18.5 were collected and the skin, viscera and adipose tissues were carefully removed. Whole skeletons were fixed in 95% ethanol for 2 days followed by fixation in acetone for an additional day, and stained with 0.015% Alcian Blue and 0.005% Alizarin Red for 3 days. Images of the skeletons were taken when most of the soft tissue had been digested in 1% potassium chloride.

Histological analysis
Tibiae from E14.5 and E18.5 embryos and vertebrae from E18.5 embryos were fixed in 4% paraformaldehyde, decalcified, dehydrated and embedded in paraffin. Serial midsagittal sections (3 μm thick) of tibiae and vertebrae were cut and stained with Alcian Blue and hematoxylin and eosin, or Safranin O and Fast Green, respectively, for morphometric analysis.

PCNA staining
Paraffin sections (3 μm thick) of E18.5 tibiae were rehydrated, blocked in 3% H2O2 in methanol for 15 minutes and digested with Proteinase K (10 μg/ml) for 10 minutes at room temperature. PCNA staining was performed with a PCNA staining kit (Promega, WI).

TUNEL staining
Rehydrated paraffin sections (3 μm thick) of E18.5 tibiae were fixed with 4% formaldehyde solution in PBS for 15 minutes followed by digestion with Proteinase K (10 μg/ml) for 10 minutes. TUNEL staining was performed using a Fluorometric TUNEL System (Promega, WI). After mounting with DAPI reagent to stain nuclei, the samples were analyzed under a fluorescence microscope using a standard fluorescein filter set to view the green fluorescence of fluorescein at 520±20 nm and the blue DAPI fluorescence at 460 nm.

In situ hybridization
Radiolabeled probes for Col2a1, Col10a1 and Mmp13 were created by transcribing linearized antisense complementary DNA in the presence of [35S]UTP using T7 polymerase at 37°C for 2 hours. DNA was removed with RNase-free DNase. The labeled RNA was purified using a mini Quick Spin RNA Column.
All probes have been previously characterized (Dong et al., 2010). Sections (5 μm thick) of E18.5 tibiae were prepared. After dewaxing and rehydration, sections were pretreated with 10 μg/ml Proteinase K, 0.2 N HCl and 0.1 M triethanolamine at room temperature. Hybridization was performed at 55°C for 18 hours. Non-specific binding was reduced by adding 10 μg/ml RNase A and by several washes in SSC. After dipping in nuclear-type emulsion, the slides were exposed for 3-7 days at 4°C, followed by developing and fixation with Kodak developer and fixer. The slides were counterstained with Toluidine Blue, dehydrated and coverslipped.

Primary chondrocyte isolation
Three-day-old neonatal mice were euthanized and genotyped using tail tissue obtained at the time of death. The anterior rib cage and sternum were harvested, washed with phosphate-buffered saline (PBS), and then digested with 2 mg/ml Pronase (Roche) in PBS in a 37°C water bath with continuous shaking for 60 minutes. This was followed by incubation in a solution of 3 mg/ml collagenase D (Roche) in Dulbecco's modified Eagle's medium (DMEM) for 90 minutes at 37°C. The soft tissue debris was thoroughly removed. The remaining sterna and costosternal junctions were further digested in a fresh collagenase D solution in Petri dishes in a 37°C incubator for 5 hours with intermittent shaking. The digestion solution was filtered to remove all residual bone fragments, and centrifuged. The cells were washed and collected for RNA analysis.

Cell culture and transfection
Rat chondrosarcoma (RCS) cells were cultured in DMEM supplemented with 10% fetal calf serum at 37°C under 5% CO2. DNA plasmids were transiently transfected into RCS cells in 6 cm culture dishes using Lipofectamine 2000 (Invitrogen, Carlsbad, CA). Empty vector was used to keep the total amount of transfected DNA constant in each group in all experiments. FLAG-EGFP plasmid was cotransfected as an internal control for transfection efficiency. Western blot and immunoprecipitation (IP) assays were performed 24 hours after transfection.

Western blotting and ubiquitylation assay
Western blotting and in vivo ubiquitylation assays were performed as described previously (Shen et al., 2006; Zhang et al., 2010). For the Runx2 ubiquitylation assay, the proteasome inhibitor MG132 (10 μM) was added to the cell culture 4 hours before cells were harvested. The rat anti-Runx2 monoclonal antibody was purchased from Marine Biological Laboratory (MBL, MA). The rabbit anti-CDK4 (C-22) polyclonal antibody and the rabbit anti-ubiquitin (FL-76) polyclonal antibody were purchased from Santa Cruz Biotechnology (Santa Cruz, CA).

Real-time quantitative polymerase chain reaction (qPCR)
Total RNA was extracted from RCS cells and primary mouse chondrocytes using RNAzol B solution (Tel-Test, TX). DNase-I-treated total RNA was reverse transcribed using oligo-(dT) primers, and the cDNA was amplified by PCR in a total volume of 20 μl containing 10 μl SYBR Green Master Mix (Thermo Scientific), 1 μl of the diluted (1:5) cDNA, and 10 pmol of the forward and reverse primers specific for the genes listed in supplementary material Table S1.
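The paper does not state how the real-time PCR data were quantified. As a hedged illustration only, the sketch below shows the common 2^-ΔΔCt approach for turning raw Ct values into a fold change relative to a control sample; the reference gene and all Ct values are hypothetical placeholders, not data from this study.

```python
# Minimal sketch (not from the paper): relative expression by the common
# 2^-(ddCt) method. The reference gene and Ct values below are hypothetical.

def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Return the fold change of a target gene versus a control sample."""
    d_ct_sample = ct_target - ct_ref              # normalize sample to reference gene
    d_ct_control = ct_target_ctrl - ct_ref_ctrl   # normalize control to reference gene
    dd_ct = d_ct_sample - d_ct_control
    return 2 ** (-dd_ct)

# Hypothetical Ct values: a marker gene in dKO vs. Cre-negative chondrocytes
fold = relative_expression(ct_target=26.0, ct_ref=18.0,            # dKO sample
                           ct_target_ctrl=23.5, ct_ref_ctrl=18.2)  # control sample
print(f"Fold change (dKO vs. control): {fold:.2f}")  # a value below 1 indicates reduced expression
```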
Reliability-Oriented Design of Solar-PV Deployments

Increasing restrictions on greenhouse gas emissions imposed by standards and the European Union's policy aim at increasing the share of renewable energy sources in the energy mix of the Member States. Consequently, a rapid increase in the installed capacity of renewable energy sources can be observed. Renewable energy sources are currently the fastest growing sectors of energy generation, specifically the photovoltaic sector. In 2005, the total installed capacity in photovoltaic installations in the European Union was about 2.17 GW, while in 2019 it was already over 130 GW. Currently, owing to many forms of governmental incentive measures, the construction of photovoltaic installations is increasing rapidly, with installations mounted on private houses and buildings. The article presents selected issues concerning the failure modes of photovoltaic installations and a comparative assessment of the estimated and the real measured electrical production of an operational photovoltaic installation. The Solar-PV power plant design approach proposed in the paper considers the failure modes to enhance the plant's reliability.

Introduction

The continuous development of modern society has created an ever-increasing demand for electricity since the Second World War. Satisfying this growing demand relying only on fossil fuels such as coal or oil is not a sustainable clean energy option, as fossil fuels are being depleted and cause pollution [1,2]. Resource depletion and increasing pollution are major threats to modern society. Renewable energy sources present a formidable opportunity to sustain the continuous development of modern society and to mitigate the threats posed by fossil energy [3]. Figure 1 shows the percentage share of renewable energy in the total energy production in the European Union in 2005-2019. Within 14 years, the share of renewable energy has doubled, from 10% in 2005 to almost 20% in 2019. Sweden has the highest percentage of renewable energy among the Member States of the European Union. In 2019, 56.4% of Sweden's energy was produced using renewable energy sources (hydropower, wind, photovoltaic and biofuels). Of all European countries, including those not belonging to the European Union, Iceland has the highest percentage share of renewable energy, 78.2% (hydropower and geothermal).
According to Directive 2009/28/EC of the European Parliament and the European Council, the aim for 2020 was to achieve a 20% share of renewable energy in total energy production and a 10% share of renewable sources in EU transport sectors. In order to achieve a 20% share of renewable energy in the total energy production in the European Union, individual Member States set up specific targets that should be achieved by 2020. Figure 2 shows the targets set up by specific Member States and their degree of fulfilment in 2019. Sweden, Finland, Slovakia, Romania, Lithuania, Latvia, Cyprus, Italy, Greece, Estonia, Denmark, the Czech Republic and Bulgaria have already met the targets. Portugal, Austria, Hungary and Germany were close to meeting their targets in 2019 (less than 1% short of the target). Despite the fact that many countries may not have achieved the required target, the total share of renewable energy in the European Union has reached the target of a 20% share in energy production. The next step will be to improve energy efficiency by 2030. The goals set for the Member States will be: reducing greenhouse gas emissions by at least 40% compared to 1990, increasing the share of renewable energy sources to 32% and increasing energy efficiency by at least 32.5%. Stand-alone photovoltaic systems are the most economic and ecological power generation solutions both for places remote from the power grid and in cities where connection to the grid is associated with high connection costs. The use of a stand-alone photovoltaic system for power supply is particularly beneficial for various types of lighting and emergency telephones on highways, navigation buoys, lighthouses, telecommunications relay stations or weather stations. Energy from stand-alone photovoltaic systems is clean, quiet and reliable. The EU 2020 Climate & Energy package targeted an increase in the share of renewable energy sources up to 20% in EU countries [5]. Therefore, investments in renewable energy sources are necessary [6][7][8][9][10][11][12][13][14]. Solar energy is a major renewable source to generate electricity through photovoltaic cells [15][16][17]. Due to various types of subsidies, a large increase in installed capacity in photovoltaic installations has been recorded [18]. Figure 3 shows the rapid increase in the photovoltaic installed capacity in the European Union in the period 2005-2019. The total photovoltaic installed capacity in the EU countries was 130.67 GW in 2019. The largest installed capacity is in Germany, with 49 GW, which accounts for almost 38% of the total capacity of photovoltaic installations in the European Union [19][20][21]. Despite the growing interest in photovoltaics in the EU Member States, almost 80% of the installed capacity belongs to only five countries: Germany, Italy, Great Britain, France and Spain.
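The growth quoted above, from about 2.17 GW in 2005 to 130.67 GW in 2019, implies a very steep average growth rate. The short sketch below is only an illustrative cross-check of that arithmetic using the standard compound annual growth rate formula; it is not a calculation taken from the paper.

```python
# Illustrative arithmetic (not from the paper): compound annual growth rate (CAGR)
# of EU photovoltaic installed capacity, using the capacities quoted in the text.
cap_2005_gw = 2.17     # EU PV installed capacity in 2005 (GW)
cap_2019_gw = 130.67   # EU PV installed capacity in 2019 (GW)
years = 2019 - 2005

cagr = (cap_2019_gw / cap_2005_gw) ** (1 / years) - 1
print(f"Average annual growth, 2005-2019: {cagr:.1%}")  # roughly 34% per year
```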
The nominal operation of a photovoltaic installation depends on the available quantity of solar energy. The availability of insolation energy is governed by many atmospheric parameters such as the sun azimuth angle, ambient temperature, air-carried dust and clouds. An indicator specifying the random nature of the cloud effect is taken into account, as it directly affects the level of the solar energy received by the photovoltaic panels [22,23]. The performance analysis of a specific photovoltaic installation requires the creation of a set of dedicated models to describe different functions in the installation [24][25][26]. A photovoltaic installation usually consists of photovoltaic panels, an inverter and power electronics systems that control the installation operation [27][28][29]. Since the output power of a PV installation depends on the random nature of the cloud effect, the installation cannot be represented by static two-state or multi-state models, as these models assume a constant value of the generated power [30]. One of the primary errors in designing photovoltaic installations comes from the lack of a load-bearing roof survey or a geological soil survey, depending on whether it is a roof- or ground-mounted installation. This reduces the strength of the structure and exposes the installation structure to damage. Another design error is the lack of an extensive analysis of the shading effects of neighbouring objects on the installation. Shading of even a small area can shut down part of the installation. For the design of photovoltaic installations, numerical simulation is largely used. It enables designers to predict the electrical energy production and to optimise the installation design [31]. Numerical simulation uses a variety of software tools and large databases that contain regional operational insolation data collected over many years. Computer simulations can also prevent design errors, which are the major causes of almost all failure modes such as installation fire, abnormal operating conditions or cable insulation damage [32,33]. The paper describes, in Section 1, the general lines of the methodology of the study. In Section 2, the paper presents a general description of the study area. In Section 3, the paper presents the identification and technical description of the photovoltaic installation and its simulations. In Section 4, the paper presents the principal conclusions and a synthesis of the comparative assessment of the estimated and the real measured electrical production of an operational photovoltaic installation.

Description of Study Area

The installation under study is located in the Subcarpathian province, Poland.
The commune is located on the right bank of the Vistula, in the south-eastern part of Poland. It covers a total area of 921 square kilometres and lies at 50°05′08.0″ N, 22°01′52.7″ E, as shown in Figure 4. The commune has a population density of 233 inhabitants/km² and lies at 200 m above sea level. The area of the commune is dominated by arable land, which constitutes as much as 65% of its area, and forest land covers about 13%. The rest of the area is occupied by urban and industrial areas. The city is located in the climatic zone of lowlands and submontane foothills. This area is characterised by hot summers, relatively small amounts of rainfall of about 600 mm, and mild winters.

Photovoltaic Installation Simulations

The design of the photovoltaic installation requires, first, a decision on whether the installation will be connected to the grid or not. The designed installation will be connected to the low-voltage grid through a bidirectional meter, which enables the surplus of energy to be transferred to the power grid. The next step is to determine the exact location of the installation. On this basis, the program selects the insolation values corresponding to the location. In the next step, one can choose whether the installation will have energy storage in the form of batteries that will store the energy surplus. In the designed installation the surplus will be transferred to the grid, therefore the installation will not have energy storage. In the next stage, one needs to create a building model, select photovoltaic panels, determine their slope and select an appropriate inverter. At the design stage, attention should be paid to whether there are any objects on the roof or in the vicinity of the building that could shade the installation. This can have a significant impact on the installation efficiency. After the simulation is completed, the program presents detailed results regarding the installation efficiency, shading effects and energy losses. A photovoltaic installation was designed for an office building located in south-eastern Poland. It consists of 16 panels with a capacity of 265 Wp each, giving a total output power of 4.24 kWp.
The BlueSol software was used to perform the simulation. Figure 5 shows the values of insolation for the location of the designed building, on the basis of which the program calculates the energy produced by the photovoltaic panels. The data source is the NASA-SSE database, which is based on measurements collected over a period of 22 years, from 1983 to 2005. Taking into account the average daily intensity of solar radiation and the number of days in a year, the program determined the total annual insolation as 1070 kWh/m². The highest value of insolation was in May and amounts to 153.14 kWh/m², while the lowest value was in December and amounts to 24.18 kWh/m². Figure 6 shows the designed installation as described in the BlueSol program. Due to the flat roof, the photovoltaic panels are mounted on metal structures at an angle of 13 degrees. Figure 7 shows the azimuth of the designed building and the path of the sun (blue line), which shows the altitude of the sun during the year and makes it possible to calculate the shading of the installation by neighbouring buildings, trees or chimneys. Based on the design, simulations were performed. Figure 8 shows the monthly energy production calculated by the BlueSol program. The total energy obtained during the year is 4244.8 kWh. The energy obtained from one kWp of installed capacity is 1001.1 kWh/kWp. The largest energy production takes place in May and amounts to 578.7 kWh, while the least energy is produced in December and goes down to 114.9 kWh. After designing the photovoltaic system and running the simulation, the installation was mounted on the building. Figure 9 shows pictures of the completed installation. The installation was realised in 2019, with a total insolation surface of 26.24 m². Measurements of the produced electrical energy were carried out throughout 2020 (Figure 10). The total energy produced in 2020 was 4810.5 kWh. Converted to one kWp of installed power, the production amounted to 1134.5 kWh/kWp. Compared to the simulation expectations, the amount of produced energy is higher by 13% (565.7 kWh more than in the simulations).
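The headline quantities above (installed power, specific yield and the simulation-to-measurement gap) follow from simple arithmetic on the reported numbers. The short sketch below only reproduces those figures as a cross-check; it is not part of the original analysis.

```python
# Cross-check of the quantities quoted in the text (illustrative only).
n_panels = 16
panel_wp = 265                              # Wp per panel
p_pv_kwp = n_panels * panel_wp / 1000.0     # 4.24 kWp installed power

e_sim_kwh = 4244.8    # BlueSol-simulated annual production
e_meas_kwh = 4810.5   # measured production in 2020

print(f"Installed power:       {p_pv_kwp:.2f} kWp")
print(f"Simulated yield:       {e_sim_kwh / p_pv_kwp:.1f} kWh/kWp")   # about 1001 kWh/kWp
print(f"Measured yield (2020): {e_meas_kwh / p_pv_kwp:.1f} kWh/kWp")  # about 1135 kWh/kWp

surplus = e_meas_kwh - e_sim_kwh
print(f"Measured - simulated:  {surplus:.1f} kWh ({surplus / e_sim_kwh:.0%})")  # ~566 kWh, ~13%
```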
Figure 11 shows the results of solar radiation measurements in individual months of 2020, carried out by the SolarAOT radiation transfer research station located in south-eastern Poland [34]. The seasonal variations in the installation conversion rate are shown in Figure 12. The energy conversion ratio is an indicator of the thermal performance of the installation. Figure 12 shows that the energy conversion ratio is not constant over the year. It seems that insolation power and seasons impact the installation performance as measured through the energy conversion ratio: the lower the insolation power, the higher the conversion ratio. The total annual insolation in 2020 was 1198.12 kWh/m², which is 12% higher than the data given in the NASA-SSE database.
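The paper does not give an explicit formula for the energy conversion ratio shown in Figure 12. A common way to express such a figure is produced energy divided by the insolation incident on the module area; the sketch below uses that assumption together with the annual figures quoted in the text, so treat it as illustrative only.

```python
# Illustrative only: the conversion ratio is not defined explicitly in the paper.
# Here it is assumed to be produced energy / (insolation * module area),
# using the annual figures quoted in the text.
e_produced_kwh = 4810.5        # energy produced in 2020
insolation_kwh_m2 = 1198.12    # measured annual insolation in 2020
area_m2 = 26.24                # total insolation surface of the installation

conversion_ratio = e_produced_kwh / (insolation_kwh_m2 * area_m2)
print(f"Assumed annual conversion ratio: {conversion_ratio:.1%}")  # roughly 15%
```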
It is also possible to calculate the approximate electricity production of a photovoltaic installation without the need for expensive simulation software. The produced energy can be calculated from the formula:

E = (N × k × P_PV × W) / STC, (1)

where N is the insolation on a horizontal surface (kWh/m²), k is a correction factor that converts the insolation from a horizontal position to an inclined surface (for a panel inclination of 13° and an inclination from the south of 15°, k = 1.09), P_PV is the photovoltaic installation power, W is the performance ratio, and STC denotes standard test conditions, STC = 1000 W/m². The performance ratio determines the level of losses in the photovoltaic installation. Table 1 shows the losses in the installation determined with the BlueSol program. Based on the solar irradiation data from the NASA database and the results of the solar irradiation measurements (SolarAOT) in 2020, the produced energy was recalculated using formula (1), as shown in the example below. For insolation on a horizontal surface N = 32.64 kWh/m² (January), k = 1.09, photovoltaic installation power P_PV = 4.24 kWp, performance ratio W = 0.863 and STC = 1000 W/m², the energy produced calculated from formula (1) equals approximately 130 kWh. The results are shown in Table 2, which compares the insolation values and the electricity production. Using formula (1), it is possible to calculate the energy production during the year with high accuracy, without the need to use specialised software. The difference between the calculations based on NASA's solar database and the PV installation design software is 24.8 kWh per year, which is a difference of 0.6%. In the case of calculations using data from measurements at the research station for 2020, the calculated annual energy produced is lower by 31.9 kWh compared to the actual data, which is a difference of 0.66%, as shown in Figures 13 and 14. Based on the results obtained from the measurements of the amount of energy produced by the photovoltaic installation, it can be concluded that the results obtained from simulation programs and calculations using the NASA-SSE database may differ significantly from the real values. The discrepancy between analytical and experimental data can be caused by the rapidly changing climate, which is difficult to predict in a given year, and the relatively low resolution of insolation measurements, which results in simulation programs assuming the same values even for remote locations whose weather conditions may be different, e.g., due to different topography. In the analysed cases, it is a difference of 13% in the amount of electricity produced.
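To make the estimate in formula (1) concrete, the sketch below evaluates it for the January example given above and for a list of monthly insolation values. Note that the formula as written above is reconstructed from the variable definitions in the text (the typeset equation is not reproduced in this version), and the monthly insolation list is hypothetical except for the January, May and December values quoted in the text.

```python
# Illustrative implementation of formula (1): E = N * k * P_pv * W / STC.
# The equation is reconstructed from the variable definitions in the text;
# the monthly insolation list below is hypothetical, not the paper's Table 2.

def produced_energy_kwh(n_kwh_m2, k=1.09, p_pv_kwp=4.24, w=0.863, stc_kw_m2=1.0):
    """Monthly energy (kWh) from horizontal insolation N (kWh/m^2); STC = 1 kW/m^2."""
    return n_kwh_m2 * k * p_pv_kwp * w / stc_kw_m2

# January example from the text: N = 32.64 kWh/m^2
print(f"January estimate: {produced_energy_kwh(32.64):.1f} kWh")  # about 130 kWh

# Annual estimate from monthly insolation values (hypothetical apart from Jan, May, Dec)
monthly_n = [32.64, 45.0, 90.0, 120.0, 153.14, 150.0,
             145.0, 135.0, 100.0, 70.0, 35.0, 24.18]
annual = sum(produced_energy_kwh(n) for n in monthly_n)
print(f"Annual estimate:  {annual:.0f} kWh")
```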
Conclusions

Currently, the photovoltaic energy market is one of the fastest growing renewable energy markets in Europe and in the world. The development of the market has been influenced by many factors, including restrictive international legal regulations on greenhouse gas emissions, the increase in the prices of traditional energy sources, the increase in subsidies for renewable energy projects and the growing ecological awareness of society. The use of numerical simulation allows the design of a photovoltaic installation to be optimised, taking into account the influence of neighbouring objects or trees on the shading of the panels, the influence of the tilt angle and the selection of all installation elements. Based on the results obtained from measurements of the amount of energy produced by the photovoltaic installation, it can be concluded that the results obtained from simulation programs and calculations using the NASA-SSE database may differ significantly from the real values. In the analysed cases, it is a difference of 13% in the amount of electricity produced. It should also be noted that the databases used by the programs are based on the results of measurements taken between 1983 and 2005 [35]. Due to the changing climate, more up-to-date databases should be used, which would allow more precise determination of electricity production from solar cells. The analysis of failures shows that photovoltaic companies have to take action to prevent failures and improve the technical conditions of PV installation operation. Further investigations should be performed for other available databases and measurements for the following years in order to compare the accuracy of the databases. Future work will also include a reliability analysis of the actual installation as well as the decrease in panel performance with age. The presented analysis constitutes a first step in assessing the risk due to this second family of failures related to the nature of solar energy, and could be considered a tool to support decision-making in the process of designing photovoltaic installations and analysing the economic efficiency of investments.
The association between iliocostal distance and the number of vertebral and non-vertebral fractures in women and men registered in the Canadian Database For Osteoporosis and Osteopenia (CANDOO)

Background: The identification of new methods of evaluating patients with osteoporotic fracture should focus on their usefulness in clinical situations such that they are easily measured and applicable to all patients. Thus, the purpose of this study was to examine the association between iliocostal distance and vertebral and non-vertebral fractures in patients seen in a clinical setting.

Methods: Patient data were obtained from the Canadian Database of Osteoporosis and Osteopenia (CANDOO). A total of 549 patients, including 508 women and 41 men, participated in this cross-sectional study. There were 142 women and 18 men with prevalent vertebral fractures, and 185 women and 21 men with prevalent non-vertebral fractures.

Results: In women, multivariable regression analysis showed that iliocostal distance was negatively associated with the number of vertebral fractures (-0.18, CI: -0.27, -0.09; adjusted for bone mineral density at the Ward's triangle, epilepsy, cerebrovascular disease, inflammatory bowel disease, etidronate use, and calcium supplement use) and with the number of non-vertebral fractures (-0.09, CI: -0.15, -0.03; adjusted for bone mineral density at the trochanter, cerebrovascular disease, inflammatory bowel disease, and etidronate use). However, in men, multivariable regression analysis did not demonstrate a significant association between iliocostal distance and the number of vertebral and non-vertebral fractures.

Conclusions: The examination of iliocostal distance may be a useful clinical tool for assessment of the possibility of vertebral fractures. The identification of high-risk patients is important to effectively use the growing number of available osteoporosis therapies.

Background

Osteoporosis is one of the most prevalent chronic health conditions. This condition is characterized by low bone mineral density and microarchitectural deterioration of bone tissue, leading to increased bone fragility and risk of fracture [1]. It is estimated that approximately 40% of white women and 13% of men 50 years and older will experience at least one clinically recognized hip, spine or distal forearm fragility fracture in their lifetime [2]. These fractures result in physical, psychological and emotional disabilities, and increased pain that can negatively influence quality of life [3,4]. Osteoporosis can be identified early during the course of the disease by diagnostic tests. Bone mineral density measurements provide the single best method for predicting fracture risk [5], but densitometers are occasionally not available and, since the pathogenesis of fragility fractures is multifactorial, bone mass is not the only factor that determines risk. A number of factors have been found to be associated with fragility fractures; they include advanced age, positive family history, height, existing fracture, propensity to falls, and postural instability [5][6][7]. Unfortunately, our understanding of risk factors is still inadequate and thus there is a need for further research. The identification of new risk factors should focus on their usefulness in clinical situations such that they are easily measured, applicable to all patients, and contribute prognostic information that is independent of bone mineral density.
The size of the gap between the costal margin and the pelvic ridge (the iliocostal distance) may prove to be a surrogate measure for the presence of osteoporosis and/or vertebral fragility fractures and thereby may be a risk factor for future fracture. Hence, the purpose of this cross-sectional study was to examine the association between iliocostal distance and vertebral and non-vertebral fractures in women and men who were seen in a clinical setting.

Study design

Patient data were obtained from the Canadian Database of Osteoporosis and Osteopenia (CANDOO). CANDOO consists of approximately 10000 patients and involves 8 sites across Canada (Calgary, Saskatoon, Winnipeg, Hamilton, Toronto, Montreal [2 sites], and Quebec City). This database is a prospective registry designed to compile a comprehensive set of osteoporosis-related clinical information [8]. All patients referred to us and seen during the course of routine specialist care were enrolled in CANDOO. Patient data are aggregated using anonymous patient identifiers into a centrally maintained, fully keyed and encoded relational database. In particular, CANDOO contains electronically stored information regarding basic patient demographics, fracture history, gynecological history, past use of osteoporosis-related drug treatment, drug side effects, past use of corticosteroids and other medications, dietary calcium intake, smoking habits, type and quantity of physical activities, fall history, past medical history and family history including fractures, a self-administered osteoporosis health-related quality of life instrument, basic laboratory results, and bone density measurements. One database record, with over 400 data fields per patient, is generated for each patient at each clinical visit. For the current analysis, the database was searched for women and men who had iliocostal distance measurements and who were seen at the Saskatoon site. The Saskatoon location was chosen because it was the only CANDOO site that recorded iliocostal distance values.

Iliocostal distance measurements and the number of prevalent fractures

Iliocostal distance was defined as the number of cm between the costal margin and the pelvic ridge of a patient, measured in the midaxillary line (Figure 1). The measurement was determined by one investigator (WPO) using fingerbreadths (1 finger = 2 cm). All patients were standing during the measurement. Prevalent vertebral and non-vertebral fractures were determined using the CANDOO questionnaire ("Have you ever had any fractures?"). Vertebral fractures may or may not have been confirmed by x-ray. Non-vertebral fractures included the ankle, arm, clavicle, elbow, foot, heel, hand, hip, knee, leg, nose, pelvis, rib, shoulder, sacrum, and wrist. Multivariable linear regression analyses were conducted to determine the relationship between iliocostal distance and the number of vertebral and non-vertebral fractures.
Potential confounding variables

Potential confounding variables collected from CANDOO included age; height; weight; menopausal status; age at menopause; lumbar spine, trochanter, femoral neck, and Ward's triangle bone mineral density (measurements were made by dual energy x-ray absorptiometry using Hologic or Lunar densitometers); prevalent vertebral fracture status (yes/no); prevalent non-vertebral fracture status (yes/no); smoking status (never, previously, previously with interruptions, currently, currently with interruptions); family history of fracture (yes/no); number of alcoholic beverages consumed per week (including beer, wine and liquor); number of falls during the last 12 months; dietary calcium intake per day (measured as mg/d and estimated by a food frequency questionnaire); number of minutes spent exercising per week (such as walking, stair climbing, jogging, swimming, bicycling, dancing, skiing and others); current medication use (etidronate, alendronate, fluoride, raloxifene, hormone replacement or corticosteroids); calcium supplement status (yes/no); vitamin D supplement status (yes/no); and comorbid conditions (lung disease, liver disease, thyroid disease, cancer, visual problems that are not corrected by eyeglasses or contacts, osteoporosis, inflammatory bowel disease, epilepsy, coronary disease, cerebrovascular disease, rheumatoid arthritis, diabetes, and kidney failure).

Statistical analysis

All multivariable regression analyses were conducted separately for women and men. We determined regression coefficient estimates as well as 95% confidence intervals (CI) of the estimates. All factors listed in the potential confounding variable section were assessed separately for their association with the number of fractures. Variables with a p-value < 0.2 were included in the multivariable analysis. Variables with a high degree of multicollinearity were removed. Model selection was determined using a stepwise procedure. If necessary, the iliocostal distance variable was forced into the final model. All statistical analyses were performed on a Dell computer using the SAS/STAT (version 8.1; SAS Institute Inc., Cary, NC, USA) software package.

Iliocostal Distance

Women

In women, univariate regression analysis revealed a statistically significant negative association between iliocostal distance and the number of vertebral and non-vertebral fractures (Table 3). After adjustments were made for confounding variables, multivariable regression analysis showed that iliocostal distance was negatively associated with the number of vertebral fractures (-0.18, CI: -0.27, -0.09; adjusted for bone mineral density at the Ward's triangle, epilepsy, cerebrovascular disease, inflammatory bowel disease, etidronate use, and calcium supplement use) and with the number of non-vertebral fractures (-0.09, CI: -0.15, -0.03; adjusted for bone mineral density at the trochanter, cerebrovascular disease, inflammatory bowel disease, and etidronate use).

Figure 1. Measurement technique for iliocostal distance. Iliocostal distance was defined as the number of cm between the costal margin and the pelvic ridge of a patient, measured in the midaxillary line.

Men

In men, univariate regression analysis did not demonstrate a significant association between iliocostal distance and the number of vertebral or non-vertebral fractures (Table 4). The iliocostal distance variable remained non-significant following multivariable adjustments, both in the model for the number of vertebral fractures (-0.12, 95% CI: -0.41, 0.16; adjusted for age and the number of falls during the past year) and in the model for the number of non-vertebral fractures (0.11, 95% CI: -0.18, 0.41; adjusted for the number of falls during the past year, visual problems that are not corrected by eyeglasses or contacts, and height).
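For readers who want to reproduce this kind of analysis outside SAS, the sketch below shows an analogous multivariable linear model in Python. The data file and column names are hypothetical placeholders; the covariate set mirrors the women's vertebral-fracture model described above, and the stepwise selection step used in the original analysis is omitted for brevity.

```python
# Hedged sketch of an analogous multivariable linear regression in Python.
# The CSV file and column names are hypothetical; the original analysis used
# SAS/STAT with stepwise selection, which is not reproduced here.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("candoo_women.csv")  # hypothetical extract of the registry

# Number of vertebral fractures regressed on iliocostal distance, adjusted for
# the covariates retained in the women's model described in the text.
model = smf.ols(
    "n_vertebral_fx ~ iliocostal_cm + bmd_wards + epilepsy + cerebrovascular"
    " + ibd + etidronate + calcium_supp",
    data=df,
).fit()

print(model.params["iliocostal_cm"])          # point estimate (about -0.18 in the paper)
print(model.conf_int().loc["iliocostal_cm"])  # 95% confidence interval
```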
Discussion

Osteoporosis is under-diagnosed and under-treated, and a large number of individuals are unaware that they have this disease. With the emergence of effective treatments it is essential to detect those patients with a vertebral fracture and those at higher risk of fracture. At present, there is no universally accepted policy for identifying patients with osteoporosis. Clinical risk assessment is an important step in identifying individuals at high risk for osteoporosis and fractures. To our knowledge, the relationships between iliocostal distance and the number of prevalent vertebral and non-vertebral fractures have not been previously reported. Our findings indicated that iliocostal distance is negatively associated with the number of vertebral and non-vertebral fractures in women, such that the shorter the distance, the greater the number of fractures. Iliocostal distance can be used to identify individuals with vertebral fractures, may be an excellent risk factor for future fractures, and should be included in a patient risk profile. This measurement is easy to obtain and assess in a clinical setting, can be measured for all patients, and has a high predictive value for prevalent fracture independent of other known risk factors such as bone mineral density. From our clinical experience, healthy adults have an iliocostal distance of approximately 6 cm. Vertebral fractures are a well-recognized consequence of postmenopausal bone loss and are the most common osteoporotic fractures [9]. It is estimated that less than one third of all vertebral fractures are clinically diagnosed [10]. A common explanation is that fractures are frequently asymptomatic and patients who suffer them are not prompted to seek medical attention. Furthermore, physicians may not be identifying prevalent fractures among their patients. The early identification of a vertebral fracture is essential. It has been shown that women who develop a vertebral fracture are at increased risk for an additional vertebral fracture [11] and that 20% of women will experience a subsequent fracture within one year of the first vertebral fracture [12]. Moreover, there is growing evidence that all vertebral fractures are associated with adverse health consequences. Nevitt et al. [13] have found that back pain, functional limitation and disability days are associated with fractures. Among this large cohort of women 65 years of age and older, patients who sustained fractures were 2 to 3 times more likely to experience back pain and disability when performing back-dependent activities of daily living compared with the unaffected comparison group. Likewise, fracture patients were at higher risk of experiencing limited-activity days and days confined to bed. Kado et al. [14] observed that women who developed new fractures over a duration of 8 years had a 23% increased risk of mortality. This study also found a dose-response effect, such that mortality increased with the number of fractures. Accordingly, it is important that physicians recognize patients at risk for vertebral fracture or patients who have sustained fractures. It is not surprising that iliocostal distance measurements in women were also associated with non-vertebral fractures. Vertebral fractures are early indicators of other osteoporotic fractures [15][16][17].
For instance, it has been shown that women who have a prevalent vertebral fracture have an increased relative risk (RR) of a subsequent fracture at the wrist (RR = 1.4), hip (RR = 2.3), and all non-spinal sites (RR = 1.8) compared with unaffected women [11]. Our results showed that iliocostal distance values were associated with fractures in women but not in men. The apparent differences between women and men are difficult to explain; however, others have found gender differences in risk factors for increased bone loss in an elderly population [18]. Nonetheless, due to the low number of men (n = 41) recruited in this study (and the resulting low statistical power), caution should be taken in the interpretation of the results. Further research will need to be conducted in men to confirm or dispute our findings. Several features of the study are unique and thus reinforce our conclusions. For example, all participants were "real world" patients who were seen for osteoporosis in a tertiary care setting and thus represent a homogeneous group. Other strengths included the large sample size of women, the careful delineation of the potential confounding variables studied, and the wealth of data available about the study cohort. Nonetheless, our study is not without limitations. The study was cross-sectional in design and partially depended on information obtained by recall. Although adjustments were made for several variables, it remains possible that other, unknown determinants of fracture confound the observed associations. Due to the lack of data, no distinction was made between lumbar and thoracic fractures. Moreover, only one investigator assessed iliocostal distance using fingerbreadth as the measurement device; as such, future validation of this useful clinical tool in terms of inter-rater reliability is recommended. Not all spinal fractures were confirmed by x-ray; x-rays were performed only in patients with back pain. Therefore, subclinical vertebral fractures may have developed. As a consequence, the actual association between iliocostal distance and the number of vertebral fractures may have been underestimated. The relationship between iliocostal distance and vertebral fractures should be tested in those patients with subclinical fractures.

Conclusions

At present, only a small number of patients at high risk for fracture are currently recognized. Indeed, vertebral fractures often do not produce symptoms, so that many individuals with fractures will not seek medical attention for the problem. However, all vertebral fractures, whether symptomatic or radiographically identified, are associated with increased mortality and morbidity. The challenge for primary care physicians is to prevent, diagnose, and treat osteoporosis as early as possible. Thus, identification is the first step in osteoporosis management. The examination of iliocostal distance may be an excellent clinical opportunity to identify osteoporotic individuals for referral for diagnosis, preventive counseling and management. The identification of high-risk patients is important to effectively use the growing number of available osteoporosis therapies. Longitudinal studies will need to be conducted to determine the association between changes in iliocostal distance measurements and fractures.
The Moral Pitfalls of Cultivated Meat: Complementing the Utilitarian Perspective with an Eco-republican Justice Approach

The context of accelerated climate change, environmental pollution, ecosystem depletion, loss of biodiversity and growing undernutrition has led human societies to a crossroads where food systems require transformation. New agricultural practices are being advocated in order to achieve food security and face environmental challenges. Cultivated meat has recently been considered one of the most desirable alternatives by animal rights advocates because it promises to ensure nutrition for all people while dramatically reducing ecological impacts and animal suffering. It is therefore presented as one of the fairest means of food production for the coming decades, according to utilitarian arguments. However, food security, environmental concerns and animal welfarism guided by a short-term utilitarianism could carry a techno-optimistic bias and could result in some forms of oppression such as anthropocentrism. I argue that there are still deep-rooted moral issues in food systems that are not addressed primarily by lab-grown meat, mainly derived from a loss of sovereignty. Food practices developed in high-tech labs with artificial interventionism constrain the ability of living entities (that are used as food) to flourish on their own terms. This paper aims to explore how sovereignty entitlements for humans and nonhumans are often overlooked by advocates of cultivated meat and the moral challenges this may pose. Accordingly, a more than utilitarian approach framed by ecological and republican justice is proposed here to shed light on some pitfalls of food chains based on cellular agriculture.

Introduction

Food and climate change are closely linked. Agricultural and farming processes have the potential to influence the planet's climate and, in turn, climate affects food production. This feedback has prompted countless researchers to investigate which foods or production methods have the least environmental and climate impact, and how to prevent climate from damaging food production. Intensive and oil-dependent methods have provided an apparent alternative in light of the Malthusian concern to ensure food for a human population that is growing almost exponentially. By means of techniques such as the use of chemicals, the mechanization of processes and the irrigation of land, it has been possible to increase the productive capacity of the land by replenishing the soil with minerals. However, maintaining this rate of food production to meet the growing demand without causing environmental degradation remains a challenge. The enormous use of water and energy required by intensive livestock farming, for example, is unsustainable in the long term (Pluhar, 2013). Since the second half of the 20th century, intensive livestock farming has profoundly transformed the socioecological metabolisms of many environments and greatly changed our lifestyles and the ecosystem rhythms of the biosphere (Reisinger & Clark, 2018). In addition, since the recent Covid-19 pandemic there has been growing concern about zoonotic diseases. This places the spotlight on industrial livestock animals, which live in conditions of poor hygiene and overcrowding that can favor the appearance of viral mutations and the spread of viruses that are also harmful to our societies (Espinosa et al., 2020). Intensive farming, where the interface between human and nonhuman animals is so narrow, makes viral jumps among species easier.
Given this scenario, we may consider the alternative of cultivated meat, also known as cultured, in vitro, lab-grown, artificial or synthetic meat, because it is presented as the long-awaited solution that will maintain expensive lifestyles without entailing a significant cost to the planet, other sentient beings and our future generations. This raises the question: why choose such a high-tech process to obtain meat products over other alternatives that have no massive industrial effects? Some traditional practices, such as extensive livestock farming, survive and cater to people's taste for meat and to strong cultural traditions, but they have unresolved underlying problems. On the one hand, the planet does not have enough surface area for extensive livestock rearing to sustainably supply current trends in diets rich in meat proteins (Hayek et al., 2020). Should this practice be chosen as a replacement for intensive livestock farming, there would have to be a drastic reduction in meat consumption. This, in turn, points to the other alternative of changing dietary preferences towards vegetarianism or veganism, because either of these would potentially free up around 76% of the land dedicated to agriculture and livestock (Poore & Nemecek, 2018). What is more, extensive livestock farming includes the slaughtering of sentient beings, which poses a serious moral problem from a non-anthropocentric ethical perspective. What about those people who prefer not to give up meat but are quite reasonably concerned about maintaining good planetary health from an anthropocentric perspective? Many food choices are not logical, reasoned actions, but rather automatic decisions relying on heuristic processing and heavily influenced by contextual patterns (Stubbs et al., 2018). Despite knowing the detrimental effects of meat on the biosphere, many people, environmentalists among them, find it difficult to sufficiently reduce their meat consumption due to a defeatist perception of individual responsibility (Scott et al., 2019). Faced with this complex scenario, where food can be conceived as an essential leverage point for a just transition towards sustainability but is at the same time a difficult habit to transform, the option of cultivated meat captures our attention. However, I will argue that the much-vaunted positive consequences of cultivated meat could conceal some moral pitfalls that should not be overlooked. In particular, I will address short-termism and the value of sovereignty for humans and nonhumans. The research question this paper explores is as follows: what moral challenges may emerge if we rethink cultivated meat from strictly utilitarian perspectives? Cultivated meat is usually defended with arguments centered on its consequences, especially in the short term, and on its advantages regarding the cost-benefit relationship in comparison with an agribusiness model (Dutkiewick & Abrell, 2021). Although utilitarian reasoning may have moral justification in a quantitative assessment of cultivated meat, it may not address concerns over which qualitative parameters are included within the quantitative balance. That is, it does not sufficiently serve to critically discuss which moral values are being considered and which are not. Furthermore, it is of no help when rethinking who is included in the decision-making process and who is excluded from it, or in questioning the structural hierarchies of power.
Thus, we are left to wonder which gaps may be unaddressed by utilitarianism, such as care for sovereignty, and how more-than-utilitarian approaches may be helpful in this regard. It is important to inquire beyond consequence-focused arguments in support of cultivated meat, because producing food is not just about obtaining a more efficient end product, but rather involves a whole set of dynamic and interdependent relationships. Diverse values coexist in these relationships that, in order to be morally respected in a plural and non-dominated way, would need to be embraced by more than utilitarian approaches. This leads me to explore a republican and ecological perspective of justice. Such approaches provide visibility regarding who participates in the decision-making process when it comes to a fairer means of sourcing food in a context of socioecological crisis, who is made invisible, and which moral values cannot be substituted by beneficial outcomes. The main aim of this paper, then, is to discuss why utilitarian arguments should dialogue with arguments framed by an eco-republican justice to address the moral pitfalls of cultivated meat.1 It is structured as follows: first, some benefits of cultivated meat will be mentioned, followed by the long-term scalability consequences that need to be considered in a stronger utilitarian defense of this food strategy. In the next section, an eco-republican perspective of justice will be presented, not as an alternative to replace utilitarian arguments for cultivated meat, but to complement the ethical analysis and provide a more in-depth examination of some issues not sufficiently addressed by consequence-centered lenses. Republicanism raises objections to the structures of domination that result from dependence on high technologies. Ecologism addresses oppression via a critique of the prevailing anthropocentrism in capitalist societies and a concern for the more-than-human world. After presenting eco-republican justice as a helpful approach for constructing normative judgments on cultivated meat, I will specify how it can help detect those scenarios where there is a loss of the value of sovereignty. Among humans, sovereignty can be lost when opportunities to participate in the food system are reduced. However, there could be more local alternatives that offer community integration in the process of in vitro meat generation. Regarding nonhumans, I will discuss sovereignty more broadly and draw on some contributions from the capabilities approach, reasoning that cultivated meat may cause a loss of sovereignty if animals' capabilities to exercise control over their own bodies and their territorial environment and to relate to other individuals are not respected. And finally, I will discuss how eco-republican justice leads to a dialogue with other normative frameworks and a decolonial attitude in which the cultural acceptance of developing synthetic meat from only some animal species is epistemologically questioned.

Cultivated Meat from a Utilitarian Point of View

Since the first laboratory-created hamburger was presented and tasted in 2013, the funding of research with animal cells to create this synthetic food and the public expectations surrounding it have greatly increased (Treist, 2021). Investigating artificial meat is neither technologically simple nor cheap, the high cost being a factor that slows down the process (Rubio et al., 2020; Stephens et al., 2018).
However, proponents of cultivated meat expect production to become cheaper if the consumption of artificial meat becomes widespread or if private, philanthropic and public sector investors decide to put more money into it. It is often argued that cultivated meat production would significantly alleviate some of the major environmental problems associated with intensive livestock farming, such as animal suffering, overexploitation of land and water, methane emissions, deforestation, and fertilizer, pesticide and fossil fuel abuse (Rubio et al., 2020; Post, 2012). In theory, it presents advantages that minimize injustices towards animals, the environment and people, meaning it has many benefits from a utilitarian point of view (Pluhar, 2010; Hopkins & Dacey, 2008). With cellular farming, far less animal slaughter would be required, or even none at all, to manufacture meat products compared to industrial livestock farming. Furthermore, its processes of technological intensification in cell culture laboratories would entail a "land sparing" strategy, as it is usually called within the field of conservation biology (Mertz & Mertens, 2017), by means of which much more land would be released than is currently maintained for livestock. This would have the benefit of leaving more natural area for wildlife, which constitutes another compelling reason in its favor for those concerned about animal suffering (Schaefer & Savulescu, 2014). Cultivated meat also promises to address environmental challenges by conserving land and water, preserving habitat, reducing greenhouse gas emissions, and preventing manure pollution and antibiotic overuse (Post, 2012; Tuomisto & Teixeira de Mattos, 2011). Proponents of cultivated meat have suggested that it generates lower emissions per unit of meat produced than current livestock production, and even lower emissions if renewable sources are used to run bioreactors and ensure global scalability of the product (Simon Nobre, 2022). In addition, cultivated meat offers a response to human population growth and the trend of increasingly preferring an animal-based diet (Post & Hocquette, 2017). It is expected that the artificial reproduction of cells will generate surplus protein food for a population of 10 billion people, while requiring fewer resources and animals than current industrial livestock farming (Bryant & Barnett, 2020; Post & Hocquette, 2017). In sum, then, the watchwords for this transition to cellular farming are that it would increase animal welfare, reduce environmental costs and sustain human health by providing food for the world's population. All these potential benefits are often considered sufficient to at least support the cultivated meat process from a consequentialist point of view. However, even if it could be produced on a large scale, there would still be epistemological and ethical problems. There are some flaws in the utilitarian approach used to support cultivated meat, some of which can be discussed even from a consequentialist point of view. The key issue for utilitarian arguments is to weigh how much suffering is saved by generating cultivated meat (Dutkiewick & Abrell, 2021). And here the answer basically depends on three main conditions: the time scale used to calculate the trade-offs; what we understand by suffering; and who is recognized as having the basic capacity for suffering.
The second and third variables appeal to more-than-utilitarian approaches, which is why I will take them up in later sections after presenting the eco-republican perspective. But the first, concerning timescale, relies on consequentialist analysis and is intertwined with different nuances of utilitarianism. I therefore mention timescale here as a potentially determining factor in understanding the environmental consequences that a defense of cultivated meat based on utilitarian arguments might face. The epistemological methodology used to defend the beneficial outcomes of cellular agriculture may come up against some challenges when calculating energy savings and environmental impacts. To this end, it would be appropriate to compare at least two temporal scenarios, short-term and long-term, given that the resulting balance may well differ to some extent. Regarding the supposed environmental benefits of cultivated meat, it is estimated to have less environmental impact than beef and perhaps pork, but more than chicken and plant-based proteins (Lynch & Pierrehumbert, 2019; Smetana et al., 2015). While the technology used in the process has significant scope for innovation that could reduce energy requirements below these assessments and offer better environmental results in the long term (Stephens et al., 2018), it cannot currently be deemed an environmentally better solution than directly advocating for a reduction in meat consumption. In fact, cultivated meat still faces challenges such as using low-cost non-animal growth cultures and designing bioreactors (Tuomisto, 2019). In addition, the Jevons paradox should be considered, which posits that stimulating technological efficiency in the use of resources can lead to their greater consumption, thereby canceling out energy and environmental savings (Polimeni et al., 2009). In other words, expanding production of cultivated meat could have a "rebound effect", in which total energy consumption would grow due to increased demand resulting from the existence of cheaper products and greater efficiency during the process. If the promotion of synthetic meat proves convincing to the public and succeeds in the marketplace thanks to rhetorical strategies, this may lead to fewer people opting for plant-based foods (even if they are more sustainable). Although cultivated meat could reduce the overall amount of animal suffering on a short time scale, on a longer one it could perhaps hit a ceiling, at which point suffering could no longer be significantly reduced. Lately, the perspective of long-termism has been included within utilitarianism, so a consequentialist defense of cultivated meat should also consider it.

Eco-republicanism to Confront the Potential Oppression of Cultivated Meat

Even if cultivated meat proponents embrace long-termism on utilitarian grounds, they still focus on and prioritize outcomes over processes, and some moral assumptions are perpetuated without necessarily being rethought. The questions posed earlier were: what do we understand by suffering, and who is recognized as having the basic capacity for suffering? Relying solely on utilitarian criteria to defend cultivated meat would not be sufficient to properly address some moral pitfalls deriving from such questions. All food chains entail several profound issues that need to be dealt with.
I argue that, in addition to criticisms of cultivated meat that may come from the long-term utilitarian perspective, there are other controversial points that deserve to be addressed from a republican and ecological justice approach. Power and domination should also be studied to determine which collectives are privileged and which remain under dependency through cultivated meat production, including human and nonhuman animals in this moral inquiry. Two approaches can be considered in this respect: the republican and the ecological.

The Republican Perspective: Preventing Social Oppression by the High-tech World

The utilitarian perspective of food security in the Anthropocene context (Noll, 2019) considers the advantages resulting from calculating the large amount of food that can be produced by cultivated meat thanks to reduced energy and environmental costs. Food production, resource consumption and certain environmental impacts are more or less measurable. The distribution of products, in our case food, that would accompany utilitarian philosophy focuses on the collectivist metric of achieving the greatest good for the greatest number of individuals (Bentham, 1996 [1789]). In this case, the sacrifice of individual rights for the public good, that is, increased food production, could be justified. However, the resulting utilitarian balance of blindly prioritizing food supplies over respect for certain inviolable values and rights could unjustly perpetuate oppressive relationships. The republican perspective goes beyond concerns about quantitative distributions mediated by utilitarianism, because it is more sensitive to relations of power and domination (Pettit, 1997). The republican position considers participation in the public sphere to be a constitutive element of citizenship (Lozano-Cabedo & Gómez-Benito, 2017). Thus, republicanism is committed to determining which collectives are privileged and which remain under dependency through cultivated meat production and distribution. Cultivated meat is a product of the high-tech world, and restricting production to this technology alone perpetuates the exclusion of key players aiming to create a sustainable food future. Farmers and land sovereigntists should be important players in our food systems (Borras et al., 2015), since they have strong local and traditional knowledge of the land (Engdawork & Bork, 2015) and of how to grow food while keeping the soil healthy (Rhodes, 2012). Nevertheless, shifting food systems to cultivated meat and high-tech processes could aggravate the loss of traditions based on respectful interactions with ecosystems and of experience in obtaining food in a sustainable and regenerative way. Some authors have pointed out that the focus on technological solutions to food security unfortunately "minimizes the need for difficult ethical reflection on our industrialized way of life in relation to either the poor or to the natural environment" (Rush, 2013). Industrializing systems and increasing reliance on technologies often cause further damage to rural collectives and to people who decide to live without so many technological dependencies, or who cannot afford to live according to this expensive system of machines, devices and ultra-processed products. In this developed model, cultivated meat might contribute to marginalizing some collectives and making them more vulnerable.
As long as there is still domination among nations, communities and species, and power remains in the hands of a small group of individuals, externalities will remain invisible and there will be no empowerment or food sovereignty. Not everyone is able to develop cellular agriculture, because prior specific knowledge of biotechnology, laboratories full of machinery, competent teams and industrialized systems are all required to this end. Consequently, the conditions for producing and obtaining food would be restricted to a privileged sector of society, and distributive procedures would largely depend on it. The technological, technical and economic barriers to participating in the in vitro processes of food generation are far from a real democratization of food systems and closer to a form of discrimination that further disconnects people from the land, nonhuman animals and self-sufficiency.

The Ecological Perspective: Averting Anthropocentric Domination

The loss of food sovereignty and the different forms of social oppression that could arise as unforeseen consequences of cultivated meat constitute a republican critique that requires further examination if we are to evaluate the moral validity of developing and producing this foodstuff. The main contribution of ecological justice is aligned with the republican concern over injustices based on domination: applying the concept of justice to the nonhuman world (Dobson, 2006). Although utilitarianism may be a helpful perspective in applied ethics and justice, it may also be based on moral ontologies that need to be rethought. The ecological perspective shines a light on the weakening and even fragmentation of anthropocentric ontologies by including the nonhuman world within the moral scope (Donoso, 2020). Regarding concerns over food justice, an ecological view aims to recognize which more-than-human entities are being instrumentalized and severely exploited in the food production chain. Distributive theories of ecological justice broaden the spectrum of beings considered as beneficiaries of the fair distribution of resources (Baxter, 2005). Recognition and participatory theories of ecological justice delve deeper and ask how beings and communities should be included in justice and how humans might listen to them without imposing our own voice (Dryzek, 1995). From this perspective, it is often assumed that nonhuman nature and even ecosystems have a certain agency worthy of mention. The use of agency as a morally relevant characteristic is somewhat more problematic, however, because its scope can be very narrow if it is linked to ideas such as cognitive intentionality or self-awareness. That being said, agency does not imply a superstitious personification of nonhuman nature: even if it lacks the capability for rationality, it could hold moral rights if the classic reciprocity between rights and duties is abandoned (Curry, 2000). Many animals and ecosystems do not express themselves via cognitive strategies or make claims based on reasons as humans do. They evolve in line with the laws of physics, chemical reactions and biological interactions. Rationality and sentience are not necessary conditions for an entity to become a victim of injustice; other capacities, such as resilience, can be measured to discover how some entities change patterns when under stress conditions (Kortetmäki, 2017).
The central issue in evaluating whether cultivated meat is permissible from an ecological perspective boils down to whether nonhuman entities form part of food decision-making and, if so, in what way. This requires learning how to listen to other voices in food regeneration processes and to include their autonomy or preferences. Thus, the ecological perspective embraces caring for nonhuman entities involved in food chains and trying to represent their own sovereignty. In accordance with this approach, rather than an end product, food may be considered an integral part of an interdependent chain. This suggests that cultivated meat should be analyzed by considering how humans respect nonhuman nature during the food generation process and by ensuring that their treatment does not slip into forms of animal disenhancement (Thompson, 2020: 355-358).

Preserving the Value of Sovereignty at Human and Nonhuman Scale

Both perspectives, the republican and the ecological, share a concern for the value of sovereignty. The former mainly addresses human sovereignty (Lozano-Cabedo & Gómez-Benito, 2017) and the latter nonhuman sovereignty (Donoso, 2020). Despite being presented separately, together they may help rethink those interfaces where human and nonhuman capabilities are intertwined in a world partially shaped by food. Consequentialist analyses focus mainly on how much collective well-being is gained through the procedure of producing cultivated meat, assessing the extent to which it is worth promoting this food transition, even if it implies accepting some trade-offs. The key argument of the thesis advocating the utilitarian advantages of cultivated meat is that, from a basically quantitative approach, the benefits outweigh the costs. However, I oppose the view that numerical calculations deriving from the cost-benefit mantra will suffice to morally evaluate cultivated meat. Cultivated meat has some moral pitfalls in addition to the beneficial consequences it may generate. The aggregate logic of utilitarian calculation can overshadow the particularities of each form of life involved in the process of generating and receiving food. There are values whose loss cannot be offset by the gain of other values. These are non-interchangeable minimums of justice. Specifically, here I address the value of sovereignty in relation to two types of sentient beings: human and nonhuman animals. It is worth noting that I understand sovereignty in a broad sense, as freedom, autonomy, or within the Senian meaning of capability: having the opportunities to be or do what one finds valuable in order to flourish (Sen, 1999). From this perspective, freedom of choice or sovereignty has not only an instrumental value (it is valuable as a means to an end), but also an intrinsic one; that is, it is valuable in itself, for the well-being of an individual. The ability to decide how one prefers to flourish is directly related to an individual's quality of life. And if sovereignty is not respected, then slow and silent suffering may ensue. This need not necessarily be associated with bodily pain, as it can also be cognitive or derive from a limitation in basic capabilities to flourish. Food strategies aimed at addressing climate challenges should not result in a loss of sovereignty, which would be a process linked to suffering (Noll & Murdock, 2020).
Any proposed eco-authoritarian alternative should be reviewed and discussed, as it is detrimental to the ability to flourish according to each individual's own conception of a good life. Faced with the current socioecological crisis characterizing the Anthropocene and the increasingly demonstrated correlations between industrialized animal-protein diets and the aggravation of this crisis (Reisinger & Clark, 2018), a growing number of people advocate renouncing some of the more democratic ideals and undertaking more autarkic measures. Thus, although eco-authoritarianism remains a marginal political movement, it is gaining adherents as the global environmental context worsens (Mann & Wainwright, 2017). Is this fair, though? What about the sovereignty of the subjects concerned? Here, in discussing the validity of a fully eco-authoritarian system that would impose a sort of green Leviathan, I am not rejecting the idea of a hierarchy in the decision-making process with regard to food systems. What I am criticizing is the fact that applying utilitarian criteria to efficiently obtain outcomes advantageous to the majority may end up being the only approach considered, given its weak commitment to inter-individual differences. It is important to focus our critique on the number of individuals participating in the political process that makes decisions, in a quantitative sense, about, for example, food. But it is also crucial to discern who the most involved and influential subjects are and who, on the other hand, are being left invisible and ignored.

Claiming Food Sovereignty for Humans

Accelerated climate change affects the capabilities and sovereignty of both human and nonhuman beings. But food practices developed in response to the environmental crisis also affect freedom in a variety of ways. First, I will focus on the loss of sovereignty for human beings. In the debate on how to protect sustainable human development, it seems that many discourses tend to point primarily towards the protection of food security. Having food is a basic right, because it is a necessary condition for being able to flourish with dignity. Without being well nourished and healthy, one will hardly be able to decide how one desires to be or continue to function. Hence, it is reasonable to aim to ensure that we all have food. But it is also reasonable to be concerned about how we can obtain that food, and about who is included in the food production and supply systems. That is, to attend to sovereignty from a human-centered morality. One aspect of consumer sovereignty is voluntariness (Mepham, 1996), but ensuring voluntariness in the choice of food can at times be complicated. When a consumer buys a food product, this does not imply consent to buy exactly that product. Consumer preferences may be conditioned by the options available at the grocery store, for instance (Röcklinsberg, 2006). Thus, a republican perspective would propose having the sovereignty to decide not merely what we want to buy but what we are able to buy. Having the choice to be part of the decision-making process about which food is sold, at what price and under which conditions, should be a legitimate claim to justice (Höglund, 2020). Losing the right to food sovereignty implies losing a value that cannot be compensated for by gaining other benefits, such as receiving certain resources or food products (Noll & Murdock, 2020). On the one hand, we should be able to know how food is made throughout its production chain.
This would imply having a more in-depth understanding of our relationship with the nonhuman animals used in this process, as well as of the resulting impacts on the land and environment. On the other hand, we should at least have the option to participate in food-making processes in order to be self-sufficient. I think that the recent Covid-19 pandemic has awakened in many people a legitimate interest and concern to become more resilient and not to depend on transporters and supermarkets in order to have enough food. Hence, a good number of people have begun to set up small gardens and grow their own food in the gardens and on the balconies of their homes (Sofo & Sofo, 2020). Seeking this food autonomy and recognizing the interdependencies generated during food production are not currently targets contemplated in the transition to cultivated meat. Some authors have suggested a hypothetical scenario known as the "pig in the backyard" as an alternative to preserve food sovereignty and self-sufficiency during the production of cultivated meat (Van der Weele & Driessen, 2019). According to these authors, we should consider the possibility of cultivated meat being produced by cell extraction from a pig kept in the backyard of our homes or our communities. This possibility would defeat objections that in vitro meat is neither local nor conducive to food sovereignty. However, at present, with large biotechnology companies leading the way in cultivated meat production, this still seems a distant scenario. In addition, even if this process were to ensure sovereignty for humans, it remains to be seen how it might ensure sovereignty for nonhumans.

Claiming Sovereignty for Nonhumans

In the case of humans, I have focused on addressing the value of sovereignty in relation to what I have called the food concern, understanding this as the freedom to produce and manage one's own food, because I believe that there are already sufficient arguments for sovereignty to be considered an inviolable value. For nonhumans, however, I will not discuss sovereignty so much in relation to food as approach it from a broader consideration. I will appeal, above all, to their capabilities to exercise control over their own bodies (to have bodily integrity), to exercise control over their surrounding environment, and to relate to other members of their own and other species (Nussbaum, 2006). For the animals used to produce synthetic food, the biopsy that accompanies the process of cultivated meat generation need not, in itself, directly disrespect the animal's sovereignty. The extraction of cellular tissue is painless and does not have to damage its physical integrity. But what about the culture medium? Cultivated meat requires fetal bovine serum for food growth in the laboratory, a culture based on calf stem cells. After a mother cow has been slaughtered and gutted, her uterus, which contains the fetus, is removed. Only fetuses older than three months are used; otherwise the heart is too small to puncture (Lanzoni et al., 2022). Although there are already tests with other culture media that do not require animal embryonic cells, such as algae, the most widely adopted option, which offers the best results, remains the use of mammalian fetuses (Hocquette, 2016). It is here that the variable of which subjects are recognized as capable of suffering, and thus deserving of moral consideration, comes into play, establishing a dialogue with the question of what is understood by suffering.
If the fetuses used during in vitro meat processing are understood as sentient beings, with intrinsic value and with a whole life ahead of them in which to flourish, then the cost-benefit balance of cultivated meat may not be as advantageous as often presumed. Here, whether or not moral status is ascribed in advance to certain individuals or entities may influence the utilitarian advantages that can be appreciated in cultivated meat. Therefore, further interdisciplinary research on animal minds and sentience is required in order to dig deeper into the moral assessment of this variable conditioning the cost-benefit balance of cellular agriculture. Returning to the ethical analysis of the biopsy process, there are yet other challenges that need to be addressed in order to preserve sovereignty. Depending on where the biopsied animals are kept, there may be potential damage to their ability to develop freely in an environment and in relation to their fellows and other species. If, for example, in order to facilitate and accelerate cell extraction, the animals were forced to remain in a reduced space and in conditions of overcrowding, then a moral problem would arise, as their territorial autonomy and freedom to interact with other individuals would not be respected. As Donaldson and Kymlicka (2011) pointed out, animals should have the political right to enjoy the sovereignty of their communities and territories. Even so, we could imagine an extended version of the hypothetical "pig in the backyard" proposal and assume that the animals from which we would extract the cells would be placed in large community yards, such as some animal sanctuaries or reserves, where they would live in coexistence with other species and could have ample freedom of movement. Preventing the biopsy from causing harm to the animal's basic capabilities would seem to avoid the moral pitfall that cultivated meat might cause from a non-anthropocentric perspective. All that being said, as mentioned above, the production of cultivated meat also requires fetal bovine serum for food growth in the laboratory, a culture generally based on millions of calf stem cells (Hocquette, 2016). This poses another ethical challenge. Beyond moral discussions regarding the possible intrinsic value of the bovine fetus, it is worth asking whether an animal, generally a cow, from which stem cells are extracted to be used as a culture medium to produce in vitro meat, is not suffering an aggression against its physical integrity. For example, some deontological ethics (Francione & Garner, 2010) would argue that insofar as the cow fertilized in order to extract stem cells is a subject capable of experiencing its own life, it has intrinsic value and, therefore, deserves rights that would hardly permit this non-consensual violation of its body. In focusing on the balance of results that generate less suffering, utilitarianism is not so sensitive to respecting those moral parameters that should be inviolable. In other words, everything can be sacrificed if it generates the greatest benefit for the greatest number of individuals. As we can see here, however, from a deontological or capabilitarian perspective, this mechanism of moral deliberation is not sufficient to address some of the pitfalls of cultivated meat. A dialogue with other ethical perspectives is needed, one that integrates a scheme of values that cannot be reduced to quantitative metrics (Dutkiewick & Abrell, 2021).
And although cultivated meat is not necessarily a utilitarian practice, its defense may be guided by utilitarian arguments and motivations; it is therefore important to be critical and watchful over the challenges that might arise.

Critical Distance when Rethinking the Value of Sovereignty

It should be noted that cultivated meat does not lead to an absolute loss of sovereignty for nonhuman animals. Actually, it would help to decolonize a large part of the systematic exploitation of the meat industry, guaranteeing respect for many of the basic capabilities of millions of them. In addition, whether as a method of intensification or as a land sparing strategy, it would allow the liberation of vast natural areas for many species to reappropriate their territories, which is fundamental to respecting their own sovereignty and facilitates options for flourishing well. In comparison with the mainstream meat industry, cultivated meat would therefore help regain a fair amount of sovereignty for nonhuman communities. Nonetheless, compared with a hypothetical scenario in which we would all be vegetarians or vegans, producing our plant-based proteins through ecological agriculture, it may be worth asking whether cultivated meat would not be generating losses in the sovereignty of nonhuman animals. This comparison of scenarios may deserve further discussion (Santo et al., 2020). That said, there are some closing remarks that I consider important to bear in mind when criticizing the narrowness of a solely consequentialist defense of cultivated meat from an eco-republican perspective. In previous sections, I have presented the republican and ecological approaches as separate, even though I call for the need to turn to both when morally analyzing cellular agriculture and, in particular, the value of sovereignty. But cultivated meat raises challenges that should be addressed jointly from an eco-republican standpoint, since moral concerns from both frameworks are intertwined. This can be illustrated, for instance, by examining what we decide counts as food and what does not. Such a decision affects both human and nonhuman sovereignty. According to republicanism, the freedoms of different societies and communities should be equally respected in deciding which animal can be chosen to reproduce its meat in vitro. From the ecological justice point of view, animals' capabilities to flourish with dignity should be equally respected. Therefore, the eco-republican perspective is concerned with both cultural diversity and thoughtless anthropocentric exploitation. The tandem of republican and ecological justice could introduce a new lens with which to rethink moral evaluations of cultivated meat focused only on its consequences, whether short- or long-term. Merely tackling global harms and the reduction of negative consequences (to nonhuman animals, the environment and human beings) entails a moral dissonance when some animal species commonly exploited as food in Western countries, like cows, pigs, chickens and fish, are used to produce cultivated meat, while there is a rejection of using other animals not normally eaten in those cultures, such as cats, dogs or rats. This has already been discussed in some other studies (Bryant et al., 2019). If it is accepted that cultivated meat be produced from sentient animals like cows and pigs, there should in principle be no moral reasons, other than speciesist prejudices, to exclude the use of cultivated meat from dogs and other animals as well.
The "non-normalness" of the latter for some societies could raise issues over epistemological colonialism about how we understand cultivated meat on a global scale. This suggests that, while debates about cultivated meat from "unusual" species, and even "ethical cannibalism" (Milburn, 2016) are philosophically interesting, products from non-traditional meat species in Western countries are unlikely to find a large consumer base (Bryant & Barnett, 2020) and are therefore of little practical relevance. Why may this be relevant from a capabilitiarian and eco-republican justice applied to cultivated meat? Because it shows that, for food ethics, the balance in consequences -between the "bad" and the "good"-is not always a quantitative assessment of trade-offs. Sometimes food ethics requires rethinking which consequences may be morally accepted, which are less permissible, and which are unjustified. This leads to plural and qualitative assessments of what values like sovereignty mean and for whom. The wide range of conceptions each society holds regarding animals reflects how cultural plurality can condition the fairness and global acceptance of cultivated meat, so that non-dominant perspectives must be considered in order to avoid some potential epistemological biases and to not perpetuate colonial preferences. Conclusion It can be difficult to imagine a scenario in which industrial meat based on intensive livestock farming is completely left behind and human nutrition requirements are still completely covered in a changing climate context. But cultivated meat seems to represent a move in this direction. It may have numerous beneficial consequences in the short term, such as causing less animal suffering, reducing some environmental impacts and ensuring food security, but it still holds some moral pitfalls. The costbenefits from a utilitarian point of view and consequentialist justice need to be discussed in greater depth. Utilitarian approaches focused on analyzing cultivated meat should consider the long-term consequences, for example. Furthermore, although cultivated meat may bring more benefits than costs compared to the current agribusiness models, there are still some missing values which should be taken into consideration, like sovereignty. Here, the objections provided from a theory of justice based on eco-republicanism are significant. Domination and power relations are not properly addressed by the cultivated meat agenda, and yet they become crucial in an ethical assessment of food systems. Preserving food sovereignty, participatory processes and recognizing the integrity of beings and environments affected by food production should be ethical targets addressed by proponents of cultivated meat. It is true that cultivated meat may contribute in a high degree to ensuring food security, and even promise better consequences for respecting sovereignty compared to the current agro-industry system. But I consider it ethically important to rethink what we mean by sovereignty and for whom it should be recognized. Strictly utilitarian arguments are rarely concerned with this. In contrast, an eco-republican framework is more likely to question how and why this background has been shaped, since one of its purposes is to embrace thought diversity and avoid domination patterns. 
As an approach it is related to decolonial epistemologies when it comes to choosing which animals we may use to reproduce their flesh in vitro, without resulting in the unjust oppression of other cultural criteria and other nonhuman beings.
A data mining paradigm for identifying key factors in biological processes using gene expression data

A large volume of biological data is being generated for studying mechanisms of various biological processes. These precious data enable large-scale computational analyses to gain biological insights. However, it remains a challenge to mine the data efficiently for knowledge discovery. The heterogeneity of these data makes it difficult to consistently integrate them, slowing down the process of biological discovery. We introduce a data processing paradigm to identify key factors in biological processes via systematic collection of gene expression datasets, primary analysis of data, and evaluation of consistent signals. To demonstrate its effectiveness, our paradigm was applied to epidermal development and identified many genes that play a potential role in this process. Besides the known epidermal development genes, a substantial proportion of the identified genes are still not supported by gain- or loss-of-function studies, yielding many novel genes for future studies. Among them, we selected a top gene for loss-of-function experimental validation and confirmed its function in epidermal differentiation, proving the ability of this paradigm to identify new factors in biological processes. In addition, this paradigm revealed many key genes in cold-induced thermogenesis using data from cold-challenged tissues, demonstrating its generalizability. This paradigm can lead to fruitful results for studying molecular mechanisms in an era of explosive accumulation of publicly available biological data.

In this report, we introduce a paradigm to integrate data collection and data analysis for mining key factors in specific biological processes (Fig. 1). To demonstrate the power of our data processing paradigm, we evaluate key factors in two applications in skin biology and energy homeostasis. The epidermis of the skin mediates various functions that protect against the environment, such as microbial pathogen challenges, oxidant stress, ultraviolet light, chemicals, and mechanical insults11. Therefore, it is critical to understand mechanisms of epidermal development to develop new treatments for human skin diseases12. Our paradigm predicts key factors in epidermal development by collecting related datasets and integrating the information. A fraction of genes are annotated in Gene Ontology (GO) or have strong functional validation based on gain-/loss-of-function studies13. The remaining genes are novel; their functionality has not been experimentally validated. We picked a top hit, suprabasin (SBSN), and performed loss-of-function experiments for its mouse homolog, Sbsn, using RNA-Seq. The analysis validates that Sbsn knockdown in mouse keratinocyte cultures down-regulates cornified envelope genes, suggesting an essential role of SBSN in epidermal differentiation. These results demonstrate the effectiveness of our paradigm in discovering key factors of epidermal development. As another application, cold-induced thermogenesis (CIT) can reduce body weight by increasing resting energy expenditure in mammals14. Genes involved in CIT can be promising therapeutic targets for treating obesity and diabetes. Thus, it is important to understand the underlying mechanism of CIT. Our paradigm detected potential CIT-related genes, including known CIT genes and novel ones, showing that the paradigm can be generalized easily to other biological processes.
It is a promising integrative analysis approach to identify key factors in biological processes.

Identification of candidate epidermal development genes.

To identify key gene expression datasets that are likely to be related to epidermal development, data curation was performed. A total of 295 epidermis development genes (according to GO) were searched on ArrayExpress to query microarray datasets, and over 300 datasets were retrieved. Due to the limitation of the search function in ArrayExpress, many retrieved datasets did not have any perturbation of these epidermis development genes, even though the gene symbols were mentioned in the datasets. To overcome this problem, manual curation was performed on each retrieved dataset to retain relevant ones, and the manual curation resulted in 24 experimental comparisons from 17 datasets with gain or loss of function of 14 epidermis development genes (Table S1 and Methods).

[Figure 1. Data processing paradigm flowchart. Data curation was performed to identify the gene expression datasets with the given biological process perturbed (e.g., the process is increased in CMP 1 with direction +1 and is decreased in CMP 2 or CMP m with direction −1). DEG analysis was performed on the curated datasets, and +1/−1/0 represents up-regulated, down-regulated, or unchanged genes, respectively. To prioritize important genes in the biological process, an affinity score of +1/−1/0 was first calculated for each gene in a curated dataset by comparing the gene expression change with the direction of regulation of the biological process, where +1 indicates that the gene (e.g., Gene 1 in CMP 1 and CMP 2) is positively related to the biological process, −1 indicates that the gene (e.g., Gene 2 in CMP m and Gene n in CMP 1) is negatively related to the biological process, and 0 indicates no relation of the gene to the biological process. No measurement (notated as NA, e.g., Gene 3 in CMP 1) indicates an unknown affinity of the gene in that dataset. By summing the affinity scores, a consensus score was calculated for each gene across the perturbed datasets. Genes with higher consensus scores were identified as more related to the biological process.]

To determine the candidate genes potentially involved in epidermal development, differential gene expression (DEG) analysis was performed on the 24 experimental comparisons of the curated microarray datasets. Differentially expressed genes were identified at q ≤ 0.05. The large-scale gene expression changes derived from our curated datasets provided a list of candidate genes that may be potentially involved in epidermal development (Fig. 2). To identify genes that are potentially critical in epidermal development, consensus gene scores were summarized for each gene from the affinities in the 24 experimental comparisons. Eighty-one genes were identified as key genes related to epidermal development with a consensus score ≥ 6 (Table S2). The heatmap (Fig. S1) shows a majority of these genes with a +1 affinity score in skin-related cell types. This information suggests that these top genes may play a role in epidermal development. To infer the biological processes involved, GO analysis was performed on these top genes using Fisher's exact test (null hypothesis: log-odds-ratio < 2) with all the genes annotated in GO as the background. Several epidermis-related GO terms were enriched in these genes (Fig. 3).
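As a concrete illustration of the scoring scheme summarized in Figure 1, the sketch below computes per-comparison affinity scores and gene-level consensus scores. It is a minimal sketch, not the authors' code: the DEG table column names ('gene', 'log2fc', 'qvalue') are hypothetical, while the q ≤ 0.05 and consensus ≥ 6 cutoffs follow the text.

```python
# Minimal sketch of the affinity/consensus scoring step, assuming per-comparison
# DEG result tables are already available; column names and example usage are
# illustrative, not taken from the paper's pipeline.
import pandas as pd

def affinity_scores(deg_table, process_direction, q_cutoff=0.05):
    """Map each gene's expression change in one comparison to +1/-1/0.

    deg_table: DataFrame with columns 'gene', 'log2fc', 'qvalue' for one comparison.
    process_direction: +1 if the biological process is increased in this
        comparison, -1 if it is decreased.
    Returns a Series indexed by gene: +1 (change agrees with the process
    direction), -1 (opposes it), or 0 (not differentially expressed).
    """
    significant = deg_table['qvalue'] <= q_cutoff
    change = deg_table['log2fc'].apply(lambda x: 1 if x > 0 else -1)
    affinity = change.where(significant, 0) * process_direction
    return pd.Series(affinity.values, index=deg_table['gene'])

def consensus_scores(affinity_series_list):
    """Sum affinity scores over all comparisons; unmeasured genes (NA) add nothing."""
    merged = pd.concat(affinity_series_list, axis=1)   # genes x comparisons
    return merged.sum(axis=1, skipna=True).sort_values(ascending=False)

# Usage: one affinity Series per curated comparison, then rank by consensus.
# scores = consensus_scores([aff_cmp1, aff_cmp2, ...]); top = scores[scores >= 6]
```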
The enriched terms included essential epidermal GO terms such as keratinocyte differentiation, epidermal cell differentiation, epidermis development, skin development, cornified envelope, and keratinization. In addition, GO terms involved in skin barrier formation were also enriched, such as fatty acid elongase activity, lipoxygenase pathway, and establishment of skin barrier. These enriched GO terms suggest that the top identified genes are critical in epidermal development. Because GO annotation of gene functions is incomplete15, we manually curated functional annotations for the top identified genes. Of these genes, besides the 18 genes annotated in the GO term "epidermis development", only three genes have loss-of-function experiments supporting their role in epidermal development. However, the majority of these identified genes have no functional experimental validation in epidermal development. Of the three genes with literature evidence, EDN1 (consensus score = 7) mediates the homeostasis of melanocytes (located at the bottom of the epidermis) in vivo upon ultraviolet irradiation16. Loss of function of ELOVL4 (consensus score = 6) represses the generation of very-long-chain fatty acids, which is critical for the epidermal barrier function, showing the important role of ELOVL4 in epidermis development17. An in vitro loss-of-function experiment on HOPX (consensus score = 6) led to increased expression of cell differentiation markers in human keratinocytes, demonstrating its involvement in epidermal development18. To evaluate how well the roles of the identified genes are understood in epidermal development, we queried the PubMed literature database and examined the results. For each gene, the keyword used in the PubMed search was constructed as "<symbol>[tiab] AND (epidermis OR skin)". The search results showed that a large proportion of identified genes (~42%, 34/81) have no publications related to skin. Therefore, these understudied genes represent potential candidates for new studies on epidermal development. In addition, the majority (>70%) of identified genes were not in the epidermis development GO term (Fig. S2). These novel genes demonstrate the ability of the paradigm to discover unknown factors in epidermal development. To demonstrate the effectiveness of the paradigm computationally, top-ranked genes derived from collective comparisons were compared with genes derived from individual comparisons (Text S1). Fig. S3 shows a significant increase (p-value = 3.6 × 10⁻⁹) in epidermal development genes identified by the paradigm compared with the differentially expressed genes derived from individual comparisons.

Validation of Sbsn role in epidermal differentiation by loss-of-function and other experiments.

Among the identified genes, a top gene, SBSN (with a high consensus score of 9), was selected to validate its role in epidermal development. A phylogenetics-based GO analysis revealed enriched GO terms related to epidermal development among co-evolved genes of SBSN (Text S2, Fig. S4). In addition, a time-course microarray dataset showed increased expression of SBSN upon epidermal differentiation (Text S3, Fig. S5). These results suggest a potentially critical role of SBSN in epidermal development. To determine the cellular component that Sbsn is involved with, we studied the differentially expressed genes in differentiating primary keratinocyte cultures from mice with Sbsn knockdown.
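Before turning to the knockdown results, the GO enrichment step used above (and again below for the Sbsn knockdown genes) can be sketched as a 2x2 Fisher's exact test of a gene set against a background. Two assumptions in this sketch: it uses scipy's standard odds-ratio = 1 null, whereas the paper tests against a stricter null of log-odds-ratio < 2, and the gene lists in the usage comment are placeholders.

```python
# Minimal sketch of a per-term GO enrichment test with Fisher's exact test.
from scipy.stats import fisher_exact

def go_enrichment(study_genes, background_genes, go_term_genes):
    """2x2 table: membership in the study set vs. annotation to one GO term."""
    background = set(background_genes)
    study = set(study_genes) & background
    annotated = set(go_term_genes) & background
    a = len(study & annotated)          # study genes annotated to the term
    b = len(study - annotated)          # study genes not annotated
    c = len(annotated - study)          # background-only genes annotated
    d = len(background - study - annotated)
    odds_ratio, p_value = fisher_exact([[a, b], [c, d]], alternative="greater")
    return odds_ratio, p_value

# Example call with placeholder gene lists:
# odds, p = go_enrichment(downregulated_genes, expressed_genes, cornified_envelope_genes)
```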
In Sbsn knockdown mouse cultures, 326 genes were up-regulated, and 161 genes were down-regulated (Methods , Table S3, Fig. 4a). To investigate the functional roles of Sbsn, these differentially expressed genes were used to search for enriched GO terms 19 using Fisher's exact test (null hypothesized log-odds-ratio <2) with the genes expressed in the Sbsn knockdown mouse culture and the controls as background. Specifically, the cornified envelope GO term was found enriched in the genes down-regulated upon Sbsn knockdown (p-value < 0.05), and eight cornified envelope genes were down-regulated (Table S4). These results suggest the role that Sbsn may play in epidermal differentiation and cornified envelope formation. Atopic dermatitis (AD) is the most common chronic inflammatory skin disease 20 . IL-4, a type 2 cytokine, contributes to the development of AD. Because broad defects of cornified envelope have been identified in AD 21 , SBSN may play a critical role in AD via defective cornification. To investigate the putative role of SBSN in AD, differentiated primary normal human epidermal keratinocytes (NHEKs) were cultured to examine the expression levels of SBSN upon IL-4 treatments via RT-PCR. In the presence of IL-4 (at doses of 5 ng/ml and 50 ng/ml), SBSN mRNA levels in the differentiated cells were significantly decreased as compared to differentiated cells without cytokine treatment (Fig. 4b). These decreased expression levels of SBSN upon IL-4 treatment suggest a critical precursor role of SBSN in the development of AD via disruption of cornification-and further indicate an important role of SBSN in epidermal differentiation. To investigate the role of SBSN in AD, expression levels of three SBSN transcripts were measured in AD lesional/nonlesional and control skins via RT-PCR (Methods). A total of 49 skin biopsies were measured, consisting of 16 AD lesional skin biopsies, 16 AD nonlesional skin biopsies, and 17 healthy controls. The expression levels of SBSN transcripts were normalized to G6PD. SBSN transcript v1 (NM_001166034.1) showed a significantly decreased level in AD lesional skin compared to AD nonlesional skin and controls (Fig. 4c). The decreased expression levels of the full-length transcript of SBSN suggests an important role of this SBSN isoform in AD. Generalization of the paradigm as demonstrated by its application on CIT. To investigate the generalizability of our integrative analysis approach, we applied the paradigm to reveal thermogenesis genes in tissues upon cold exposure. We collected ten gene expression datasets from GEO (Table S5). These gene expression data were collected from tissues of mice treated with cold temperature to induce thermogenesis. Both microarray and RNA-Seq data were collected. Because thermogenesis is always activated upon cold exposure, the direction of thermogenesis is thus increased in all the 24 comparisons within the ten collected datasets. Using DEG analysis, the paradigm calculated the consensus scores for measured genes from 24 comparisons and identified 153 genes with a consensus score ≥6 (Table S6). These 153 identified genes were then used to perform GO analysis. Enriched GO terms are related to energy homeostasis (Fig. S6). Literature curation confirmed the functional evidence in CIT of some identified genes. 
For example, mice with ablation of elongation of very-long-chain fatty acids (Elovl3, consensus score = 13) showed an increased metabolic rate in a cold environment, indicating a higher capacity for brown fat-mediated nonshivering thermogenesis. Thus, Elovl3 is a key regulator for CIT in adipose tissue upon cold exposure 22. As another example, carnitine palmitoyltransferase 2 (Cpt2, consensus score = 11) mediates fatty acid oxidation in adipose tissue, which is required for CIT, and its depletion suggests the critical role of Cpt2 in CIT 15,23. This second application of our paradigm in CIT suggests that the paradigm can be generalized to other biological processes. Our paradigm is a simple but important integrative data processing approach for gene expression data.

Discussion

We propose a gene expression data processing paradigm to identify key factors in biological processes. The collection of gene expression data enhances the identification of key factors in biological processes. The application of the paradigm for epidermal development revealed known and novel epidermal development genes. To validate the novel predictions, an understudied gene, SBSN, was specifically investigated for its potential role in epidermal differentiation. SBSN has been identified in the suprabasal layers of the epithelia in the epidermis 24. Although SBSN was previously shown to be induced upon differentiation of epidermal keratinocytes, no loss-of-function study has been performed to demonstrate the functional role of SBSN in epidermis or skin. Our phylogenetics-based GO analysis suggests relevant biological processes in epidermal development for SBSN co-evolved genes (Fig. S4). RNA-Seq analysis in Sbsn knockdown mouse keratinocyte cultures revealed down-regulated cornified envelope genes, suggesting a role for Sbsn in epidermal differentiation. SBSN may also be critical in AD, an inflammatory skin disease, because AD has a broadly defective cornified envelope 21,25. Because the full-length isoform of SBSN may play a more critical role in the development of AD (Text S4, Figs. 4c and S7) and IL-4 is involved in AD 20, we examined the effects of IL-4 on human differentiating keratinocyte cultures and found decreased expression levels of SBSN in IL-4-treated compared to non-treated cultures (Fig. 4b). These results indicate that SBSN may be a target for aberrant cytokine production in AD.

The paradigm identified key genes by collecting multiple datasets and integrating information from the collected datasets, especially from datasets of the cell or tissue types most relevant to the target biological process. However, due to the limited availability of such datasets in certain areas, relevant datasets from other cell or tissue types may also be used, for they generally will not worsen the results. This is consistent with the idea of ensemble learning, in which merging many weak and independent classifiers will result in a strong classifier 26. The heatmap of identified genes (consensus score ≥6) showed seven experimental comparisons from epidermal cells clustered together (Fig. S1). To systematically cluster the 24 experimental comparisons, a hierarchical clustering analysis using an affinity distance metric (Text S5) grouped the comparisons into eleven distinct clusters (at a cutoff distance ≤0.05) (Fig. S8). As a result, the seven experimental comparisons from epidermal cells were also consistently clustered together in the hierarchical clustering analysis.
These results indicate that the experimental comparisons from epidermal cells contributed the most. In the future, with more datasets from epidermal tissues/cells generated, it may not be necessary to include datasets from nonepidermal tissues/cells, as the marginal contribution from them is likely to be negligible.

The paradigm starts from gene expression datasets with the perturbation of a biological process. This data collection process is critical. As for the application in epidermal development, we searched ArrayExpress using the text of epidermal development for candidate gene expression datasets. However, none of the five retrieved datasets showed changes in epidermal development, so the bare keyword search functionality offered by ArrayExpress yielded no usable data (Fig. S9). Because known genes annotated in the epidermis development GO term provide candidate factors responsible for the regulation of epidermal development, datasets with these perturbed known genes can be a starting point for our paradigm. However, it should be understood that the paradigm is not limited to a single GO term. Due to incomplete annotation in GO 5, genes in other GO terms, such as keratinocyte differentiation (GO:0030216) and epidermal cell differentiation (GO:0009913), can also play roles in epidermis development. Thus, starting from these additional genes along with the genes in the epidermis development GO term, the performance of the paradigm is expected to improve because more information may be borrowed from other relevant datasets. To apply our paradigm, it is critical to examine the collection of gene expression datasets. In addition, our paradigm can include both microarray and RNA-Seq data, as shown in the CIT application, enabling the inclusion of more data, leading to better results than with only one data type.

To obtain a manageable number of identified genes, our computational analysis focused on genes with a consensus score ≥6. A simulation was performed to evaluate the empirical distribution of consensus scores in epidermal development and CIT, and a cutoff of ≥6 corresponds to an empirical p-value of 3.72 × 10−8 and 1.09 × 10−8 for epidermal development and CIT, respectively (Text S6 and Fig. S10). But other thresholds may also be used. Higher thresholds lead to fewer but more robust identified genes, while lower thresholds lead to more but less robust identified genes. An investigator should pick a cutoff appropriate for the intent of the investigation. For example, if the purpose is to identify more novel epidermis development genes for further experimental validation, genes with consensus scores lower than 6 can also be considered. In addition, the cutoff should also be related to the total number of comparisons used in the analysis. In general, with more comparisons, the cutoff for the consensus score should be greater. It is worth mentioning that genes with negative consensus scores, in general, do not contribute to regulation in biological processes. For the application of epidermal development and CIT, a consensus score cutoff ≤−5 was used to extract negative genes, and no enriched GO terms are related to epidermal development (44 negative genes) and CIT (95 negative genes) (data not shown). These results are consistent with the intent of the scoring scheme defined in Fig. 1.
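The empirical p-value for the consensus-score cutoff mentioned above can be approximated with a permutation-style simulation. The exact scheme of the paper's Text S6 is not reproduced here; the sketch below simply shuffles each comparison's affinity labels across genes to build a null distribution of consensus scores, and all numbers (gene count, label frequencies, number of permutations) are illustrative assumptions.

```python
# Rough sketch of an empirical null for consensus scores: permute each
# comparison's +1/-1/0 affinity labels across genes and record the resulting
# null consensus scores. Very small empirical p-values require many more
# permutations than shown here; this only illustrates the idea.
import numpy as np

rng = np.random.default_rng(0)
n_genes, n_comparisons = 20000, 24
# hypothetical observed affinity matrix with entries in {-1, 0, +1}
affinity = rng.choice([-1, 0, 1], size=(n_genes, n_comparisons), p=[0.05, 0.9, 0.05])

null_scores = []
for _ in range(500):
    permuted = np.column_stack([rng.permutation(affinity[:, j])
                                for j in range(n_comparisons)])
    null_scores.append(permuted.sum(axis=1))
null_scores = np.concatenate(null_scores)

cutoff = 6
empirical_p = (null_scores >= cutoff).mean()
print(f"empirical P(consensus >= {cutoff}) under permutation: {empirical_p:.2e}")
```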
The number of genes with positive and negative consensus scores would be expected to be roughly the same due to normalization, but the positive genes, in fact, have longer tails than the negative genes (Fig. S11). Quantile testing shows significantly larger positive scores compared to the absolute values of the negative scores at the 0.95-quantile for both applications (p-value < 2.2 × 10−16) 27. This suggests that genes positively correlated with epidermal development and CIT are more likely to be consistent across different experiments, indicating that these positive genes are more likely to be relevant to the respective processes. In summary, the paradigm is valuable in identifying key factors for biological processes using gene expression data.

Methods

Ethics statement. All skin samples were collected according to procedures (NKEBN/486/2011) previously approved by the local ethics committee (Independent Bioethics Commission for Research at Medical University of Gdansk). Written consent was obtained from all patients prior to enrollment in the study.

Curating gene expression data related to epidermal development. We collected gene expression datasets related to epidermal development by manual curation according to the following procedure. First, we searched ArrayExpress using the keyword ("epidermis + development" OR "epidermal + development") AND organism: "homo sapiens", retrieving only five studies, none of which could be reused to study the epidermal development process because epidermal development was not perturbed in the datasets (Fig. S9). Therefore, we started from known epidermal development genes to curate datasets with the process perturbed. Specifically, genes from the GO 19 epidermis development (accession GO:0008544) term were extracted first for humans. Then, the official symbol of each gene was queried on ArrayExpress for human microarray datasets. Each retrieved dataset was manually examined to retain only the datasets with at least one epidermis development gene being perturbed (i.e., knocked out, knocked down, or overexpressed). To ensure proper downstream statistical analysis, any dataset with no replicates was discarded.

Data processing paradigm of the perturbed expression data. To identify the genes related to a biological process, our data processing paradigm was performed on the gene expression data to capture the affinities between specific genes and the biological process. An affinity score of +1 or −1 means that the gene is positively or negatively related to the biological process. Specifically, if the expression of a gene is increased or decreased in a biological process that is increased, the gene has an affinity score of +1 or −1 for the biological process. Alternatively, if the biological process is decreased, these genes have an affinity score of −1 or +1. The affinity score was 0 or NA for the genes not differentially expressed or unmeasured. The detailed workflow of the paradigm is shown in Fig. 1. For a biological process, systematic data curation is performed to collect gene expression datasets with the process perturbed (increased or decreased). Using DEG analysis (Text S7 and S8) [28-30], affinity scores are calculated for each gene in each comparison in each dataset. Finally, a consensus score is calculated by summing these affinity scores among the comparisons for each gene. High consensus scores suggest that the corresponding genes are potentially critical to the biological process.
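The affinity and consensus scoring rule just described can be written down in a few lines. The sketch below uses hypothetical gene symbols, comparison names, and input tables; it only illustrates the sign convention (gene change multiplied by process direction) and the summation into a consensus score.

```python
# Sketch of the affinity/consensus scoring rule defined above: a gene moving
# in the same direction as the perturbed process gets +1, the opposite
# direction gets -1, unchanged genes get 0, and unmeasured genes stay NA.
# Gene symbols, comparison names, and values are hypothetical.
import pandas as pd

def affinity(deg_direction: pd.Series, process_direction: int) -> pd.Series:
    """deg_direction: +1/-1/0 per gene (NaN if unmeasured); process_direction: +1 or -1."""
    return deg_direction * process_direction

# one +1/-1/0 vector per experimental comparison, indexed by gene symbol,
# paired with the direction of the biological process in that comparison
comparisons = {
    "CMP1": (pd.Series({"SBSN": 1, "EDN1": 1, "HOPX": 0}), +1),   # process increased
    "CMP2": (pd.Series({"SBSN": -1, "EDN1": 0, "KRT1": -1}), -1), # process decreased
}

scores = pd.DataFrame({name: affinity(deg, d) for name, (deg, d) in comparisons.items()})
consensus = scores.sum(axis=1, skipna=True)   # NA (unmeasured) contributes nothing
print(consensus.sort_values(ascending=False))
```

With 24 comparisons the same summation applies; genes measured in only a few comparisons simply accumulate fewer terms, which is why the cutoff choice is tied to the number of comparisons.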
Thus, our paradigm is a general framework that can be used to identify the key factors in a biological process.

NHEK cell culture and treatment. Primary NHEKs of neonatal foreskin were purchased from Thermo Fisher Scientific and were maintained in EpiLife Medium containing 0.06 mM CaCl2 and S7 supplemental reagent under standard tissue culture conditions. The cells were seeded in 24-well dishes at 2 × 105/well to form a confluent monolayer. On the following day, the cells were subjected to differentiation by increasing CaCl2 to 1.3 mM in the culture media with or without human recombinant IL-4 at designated concentrations. The cells were harvested for total RNA extraction before differentiation and after they had been differentiated for 5 days. Total RNA was extracted using the RNeasy mini kit according to the manufacturer's guidelines (QIAGEN, MD). RNA was then reverse transcribed into cDNA using SuperScript III reverse transcriptase from Invitrogen (Portland, OR) and was analyzed by real-time RT-PCR using an ABI Prism 7000 sequence detector (Applied Biosystems, Foster City, CA). Primers and probes for human SBSN (Hs01078781_m1) and 18S (Hs99999901_S1) were purchased from Applied Biosystems (Foster City, CA). Quantities of all target genes in test samples were normalized to the corresponding 18S levels.

Sbsn siRNA in mouse keratinocyte culture and RNA sequencing. Primary murine keratinocytes were isolated from BALB/c neonatal mice as previously described 31. Primary murine keratinocytes were cultured in a supplemented minimal essential medium (Gibco, Thermo Fisher Scientific Inc., Waltham, MA, USA) with 8% fetal bovine serum (FBS, Atlanta Biologicals, Flowery Branch, GA, USA) and 1% antibiotic (Penicillin Streptomycin Amphotericin B, Sigma), with 0.05 mM Ca2+ concentration. A total of 1 × 106 cells were seeded in each well of six-well plates. Twenty-four hours after seeding, the siRNA mix (Opti-MEM serum-free media (GIBCO), 75 pmol of siRNA (Dharmacon), and HiPerfect transfection reagent (Qiagen)) was added to cells. For Sbsn, the siRNA used was Dharmacon SMARTpool siGENOME Sbsn siRNA (M-054578-01-0005), and the control was mouse SMARTpool siGENOME Non-Targeting siRNA Pool #1. Each condition was done in triplicate. To induce keratinocyte differentiation, a final concentration of 0.12 mM Ca2+ was used. RNA was harvested 48 hours after siRNA transfection. Total RNA from cells was extracted using the RNeasy kit (Qiagen) according to the manufacturer's instructions. A total of 100 ng was used to prepare the libraries utilizing a Neoprep Library kit (Illumina). RNA sequencing was performed in the NIAMS Genome Core Facility at the National Institutes of Health.

DEG analysis using Sbsn knockdown RNA-Seq data in mouse differentiating primary keratinocyte cultures. To identify the differentially expressed genes in mouse differentiating primary keratinocyte cultures in which Sbsn had been knocked down with siRNA, the following analysis was performed. The raw RNA-Seq reads were aligned to the mouse (mm10) genome using STAR (version 2.5.1b) 32 with default settings. The uniquely aligned reads were retained to calculate the read counts for each gene against the UCSC KnownGene annotation (mm10), and a count table was constructed by counting the number of reads aligned uniquely to each of the genes for each sample. DEG analysis was performed with DESeq2 33.
To adjust for the batch effect, a generalized linear model with a batch factor was used to model the read counts for all samples, and the Wald test was used to test the significance of differences in gene expression between Sbsn knockdown samples and controls. FDR-adjusted q-values were then calculated from the p-values in the Wald test using the Benjamini-Hochberg procedure 34. The log2-fold changes between Sbsn knockdown samples and controls were also calculated for each gene. The differentially expressed genes were identified under |log2 fold change| > 0.5 and q < 0.05.

RT-PCR analysis of AD skin. For the current study, arm skin samples (2 mm punch biopsies of 3 mm depth) were taken from AD patients (from lesional and nonlesional AD skin), and skin samples (controls) were obtained from healthy subjects. The nonlesional skin biopsy was performed at a distance of at least 10 cm from AD skin lesions. Immediately after biopsy, the skin samples were placed in RNAlater solution (Qiagen) and stored at −20 °C. Total RNA was isolated using standard methods. The mRNA levels were analyzed by real-time RT-PCR with TaqMan primer-probe sets using the Path-ID Multiplex One-Step RT-PCR Kit (Applied Biosystems). The reference transcript G6PD was used as an internal standard and was amplified together with each target gene transcript in the same well using primers and probes, as shown in Table S7. The level of each analyzed transcript was normalized to that of the appropriate reference transcript.

Data availability. The datasets used in this study are available in the National Center for Biotechnology Information's (NCBI's) GEO 2 with accession GSE100100.
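The differential-expression thresholding described above (Benjamini-Hochberg adjustment of Wald-test p-values, then |log2 fold change| > 0.5 and q < 0.05) can be sketched as follows. The results table, its column names, and the example values are assumptions meant to resemble a DESeq2-style export; they are not the authors' actual pipeline code.

```python
# Sketch of the DEG filtering step: BH adjustment of per-gene p-values,
# followed by the fold-change and q-value cutoffs. Table contents are
# hypothetical examples, not data from the study.
import pandas as pd
from statsmodels.stats.multitest import multipletests

results = pd.DataFrame({
    "gene": ["Sbsn", "Lor", "Flg", "Krt14", "Ivl"],
    "log2FoldChange": [-1.2, -0.8, 0.1, 0.6, -0.4],
    "pvalue": [1e-6, 4e-4, 0.7, 0.01, 0.03],
})

results["qvalue"] = multipletests(results["pvalue"], method="fdr_bh")[1]
deg = results[(results["qvalue"] < 0.05) & (results["log2FoldChange"].abs() > 0.5)]

up = deg[deg["log2FoldChange"] > 0]
down = deg[deg["log2FoldChange"] < 0]
print(f"{len(up)} up-regulated, {len(down)} down-regulated genes pass the cutoffs")
```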
The Impact of COVID-19 Restrictions on Mental Well-Being and Working Life among Faroese Employees The societal changes caused by COVID-19 have been far-reaching, causing challenges for employees around the world. The aim of this study was to assess the effect of the COVID-19 restrictions on mental well-being, working life, family life and social life among Faroese employees within a broad range of professions. A total of 1328 Faroese employees answered an anonymous self-report survey from 13 April to 4 May 2020. Employee mental well-being was only modestly affected by the restrictions and the respondents had a mean score of 50.7 on the Warwick–Edinburgh Mental Wellbeing Scale where a score between 41–44 is found to correspond with possible depression. Work commitment, work and family life, work satisfaction and work ability were all rated significantly worse after the COVID-19 outbreak than before (all p values < 0.005). Contrary to previous research, employees in health services assessed their work ability significantly higher than employees in teaching, and child and youth care (p < 0.05). Working parents had higher levels of stress and assessed their work ability significantly lower than employees without children (p < 0.05), and women tended to be more worried than men because of the pandemic. In conclusion, the overall mental well-being of Faroese employees was on an average level during lock-down in April and May 2020. Their working life seemed, however, to be worse than usual. Introduction The societal changes caused by COVID-19 (Corona Virus Disease -19) have been farreaching, as we face the second and third waves around the world. The strong measures taken to limit the spread of the disease have caused challenges for employees, and, since March 2020, there has been a large shift in the way we work [1,2]. Most professional branches have been forced to solve their work-related tasks differently than usual: Working from home, working alone, working in small groups, etc., which, among other things, has made it more difficult to stay in touch with managers and colleagues. The distinction between 'essential' and 'non-essential' functions [1] was early in the first phase a part of the official guidelines. The essential functions such as grocery stores, the acute functions in the hospital system, care homes for elderly and disabled people and child care for employees in the essential functions were to stay open, whereas the non-essential functions such as beauty parlors, child care for non-essential staff and schools were to close their services or let their employees solve their tasks remotely. These sudden changes have made the topic of employee well-being more critical and may pose different challenges for different groups of employees. For employees in the essential functions, the infection risk might lead to mental health concerns, stress and anxiety [2,3], while employees working remotely are at risk for experiencing loneliness and isolation, which has been associated with depression, suicidal behavior and other mental difficulties [4]. One study found perceived stress among individuals working from home to increase during COVID-19 [5]. tourism), and other workplaces rearranged their activities, e.g., by dividing employees in groups and limiting interaction between groups of colleagues. Social distancing was recommended, and people were encouraged to avoid social gatherings. 
After the first case, other cases followed quickly, causing a spike in March, another spike in August, and yet another spike in December 2020. The government's approach has from the beginning been large-scale testing, contact tracing and isolation combined with social distancing. Testing is still mandatory for visitors to the country [24,25]. At the time of writing, the Faroes have recorded 661 cases of COVID-19, and there has been one fatality from COVID-19 in the Faroe Islands [25].

The aim of this study was to assess the effect of COVID-19 on mental well-being among Faroese employees within a broad range of professions. Because the pandemic put an immediate strain on the health care systems worldwide, we hypothesized that health care professionals' mental well-being and experience of working environment during lockdown would be more negatively affected than the mental well-being of other professionals. We also hypothesized that working parents would experience poorer mental well-being and higher levels of stress during lock-down, because of home-schooling, the lack of childcare and a general shift in domestic care obligations.

Study Design and Participants

The anonymously completed self-report survey was available on the online platform SoSci Survey [26] from 13 April until 4 May 2020. The survey was distributed via social media and disseminated to employees through Faroese unions. The survey was open to all employees older than 18 with access to the internet. Participation was voluntary and data collection was fully anonymous, e.g., by not collecting any data that could potentially identify a respondent.

Survey Items

The survey consisted of 76 questions, targeted at examining how the COVID-19 restrictions affected the mental well-being, working life, family life and the social life of Faroese employees during lock-down. The questions in the survey also addressed how respondents experienced circumstances around their work situation (e.g., job satisfaction and perceived stress) under normal circumstances and during the COVID-19 outbreak. In addition, sociodemographic data were collected in the survey including age, sex, marital status, family situation, education and employment status, rating of overall health and possible mental health issues. The questionnaire was tailored by the authors to address the research questions. The rationale was to use multiple validated tools concerning mental well-being (Warwick-Edinburgh Well-being Scale) and working environment (from the Danish survey 'Working Environment and Health'), and to maintain high feasibility.

The Warwick-Edinburgh Mental Well-being Scale (WEMWBS) was used to assess the respondents' mental well-being over the last 2 weeks. WEMWBS is a 14-item scale covering subjective well-being and psychological functioning, in which all items are worded positively and address aspects of positive mental health. The scale is scored by summing responses to each item answered on a 1 to 5 Likert scale. The minimum scale score is 14 and the maximum is 70. A high score indicates good mental well-being whereas a low score indicates poor mental well-being [27-29]. A score of 40 and below has been found to correspond to probable depression and a score of 41-44 to possible depression [30]. Population studies in Denmark, England and Catalonia found mean scores ranging between 49.8 and 58.1 [28]. A survey among Faroese high school students found a mean score of 50.9 (ranging from 49.0-55.5) [31].
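Since the WEMWBS total is simply the sum of 14 items scored 1-5, the scoring and the cut-points quoted above can be illustrated in a few lines. The item column names and example respondents below are hypothetical, and the cut-points (40 and below, 41-44) are those cited from reference [30] rather than formal diagnostic categories.

```python
# Sketch of WEMWBS scoring: sum of 14 positively worded items (each 1-5),
# giving a total between 14 and 70, then the cut-points cited in the text.
# Column names and respondent values are hypothetical.
import pandas as pd

items = [f"wemwbs_{i}" for i in range(1, 15)]
df = pd.DataFrame([[4] * 14, [3] * 14, [2] * 14], columns=items)  # three example respondents

df["wemwbs_total"] = df[items].sum(axis=1)
df["category"] = pd.cut(
    df["wemwbs_total"],
    bins=[13, 40, 44, 70],
    labels=["probable depression", "possible depression", "above cut-off"],
)
print(df[["wemwbs_total", "category"]])
```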
The survey items addressing psychosocial working environment were translated into Faroese from the Danish survey 'Arbejdsmiljø og Helbred' ('Working Environment and Health') developed by the Danish National Research Centre for the Working Environment [32]. The items addressed four aspects of working life: work commitment (five items: self-confidence; interesting tasks; importance of work; cheerful at work; preoccupied at work), work and family life (eight items: time at work; rate of working; deadlines; time pressure; availability after work; overtime; energy at work; work time vs leisure time), work satisfaction (1 item) and work ability (1 item). All but one item were answered on a 5-point Likert scale with answers ranging from 'not at all', 'to a very little extent', 'to some extent', 'to a great extent' to 'a very great extent' [33]. The respondents were asked to answer the same questions twice: (1) 'as they usually would assess their job' and (2) 'as they would assess their job during COVID-19'. The survey items 'effects of COVID-19 on working life, family life and social life' and 'coping with changes caused by COVID-19' had a free text option, where respondents could write other effects or other ways of coping than the answer options allowed.

Statistical Analysis

Data are presented as number of respondents and percentage (of total number of respondents) for categorical data and mean and standard deviation (SD) for normally distributed data. Paired samples t-tests were used to assess possible differences in working environment (work commitment, work and family life, work satisfaction and work ability) pre and during COVID-19. Binary logistic regression analysis was used to examine the relationship between good or poor mental well-being and these predictors: perceived stress, job satisfaction, work ability, age, education, gender and children. Binary logistic regression analysis was also used to examine the relationship between being an employee with children under the age of 18 or not and the same predictors. Multiple analysis-of-variance (ANOVA) tests were used to explore possible differences between different professions in the survey concerning the items perceived stress, mental well-being, (during COVID-19) job satisfaction and (during COVID-19) work ability. The free text responses were analyzed by the first author, by reviewing all the comments and organizing them in categories. Data were analyzed using the Statistical Package for the Social Sciences (IBM SPSS v. 25, IBM, Armonk, NY, USA) [34].

Results

A total of 1328 people participated in the survey with a mean age of 34 (ranging from 19-71 years); 77% (n = 1025) were women. The majority (74.5%) of the respondents were employed in the public sector, and the largest branches were teaching, health services and jobs within child and youth care, such as daycare, kindergarten and recreation centers. Other respondents were employed in, e.g., social agencies, production industry, building industry, long-term care sector, financial sector and within administration. Nineteen percent were employed in the private sector. Most of the participants were married or in a relationship, and more than half of the participants had children under the age of 18 (Table 1).

Table 1. Participant characteristics (excerpt). Marital status: married/in a relationship, n (%): 1114 (83.9). Children under the age of 18: yes, n (%): 796 (59.9). a This category comprises the remaining branches.
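As an aside on the binary logistic regressions described in the Statistical Analysis section above, a minimal sketch of one such model (WEMWBS dichotomized at 44 regressed on the listed predictors) is shown below. The variable names, coding, and simulated data are illustrative assumptions, not the authors' exact model specification or the survey data.

```python
# Sketch of a binary logistic regression for mental well-being (WEMWBS <= 44
# vs > 44) on the predictors named in the Methods. Data are simulated and
# variable coding is assumed; odds ratios are obtained by exponentiating the
# fitted coefficients.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "wellbeing_above44": rng.integers(0, 2, n),
    "perceived_stress": rng.integers(1, 6, n),
    "job_satisfaction": rng.integers(1, 6, n),
    "work_ability": rng.integers(0, 11, n),
    "age": rng.integers(19, 72, n),
    "university_degree": rng.integers(0, 2, n),
    "female": rng.integers(0, 2, n),
    "has_children": rng.integers(0, 2, n),
})

model = smf.logit(
    "wellbeing_above44 ~ perceived_stress + job_satisfaction + work_ability + "
    "age + university_degree + female + has_children",
    data=df,
).fit(disp=False)

odds_ratios = np.exp(model.params)  # OR > 1 raises, OR < 1 lowers the odds of good well-being
print(odds_ratios.round(2))
```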
The Relationship between COVID-19 Restrictions and Mental Well-Being and Perceived Stress

In total, 71% of the respondents evaluated their current overall health as good or very good. Furthermore, 79.2% of the respondents reported that their mental well-being was not at all or only to a small extent affected by the COVID-19 restrictions. The respondents' mental well-being was also assessed with the WEMWBS scale and showed a mean score of 50.7 (SD = 8.1, median = 51.0). The percentage of respondents scoring below 44 (signaling depression) was 21.8. When asked about perceived stress over the last 2 weeks, 12.6% had felt stressed, and the largest proportion (42.1%) in this group pointed to the combination of work and family life as the greatest source of stress (as opposed to work or family life). Of the respondents, 34.4% answered that COVID-19, to a large or very large extent, had caused concerns. Table 2 illustrates the distribution among the different types of concerns. The large majority (86.9%) felt that they, to a large or very large extent, could control their concerns.

Table 2. Concerns caused by COVID-19 among respondents in the anonymously completed self-report survey in the Faroe Islands conducted from 13 April until 4 May 2020, n = 1328.

Another recurring concern for the employees, apparent in the free text responses, was about carrying the disease to the workplace and thus infecting others, especially elderly or other vulnerable people. The respondents managed the changes caused by COVID-19 in different ways, which is illustrated in Table 3.

Table 4 shows the results of a logistic regression analysis for mental well-being (measured with the WEMWBS) below or above 44. The results displayed in Table 4 suggest that six predictors significantly enable us to predict mental well-being (measured with the WEMWBS): perceived stress, job satisfaction, work ability, age and education. Employees with high levels of perceived stress were more likely to have a WEMWBS score below 44 and thus have poor mental well-being (OR < 1.00). Employees aged 19-50 years and with a university degree of minimum three years of length also had increased odds of a score below 44 (OR < 1.00). The probability of having a WEMWBS score above 44 was higher for employees who were satisfied with their job and reported higher levels of work ability (OR > 1.00).

Working Life

Around two thirds (66.8%) of the respondents experienced that the restrictions, to a large or very large extent, had affected their working life. The most influential changes were working other hours than usual, only or primarily working from home, working fewer hours, and having other tasks. Of the people who worked from home, 37.2% found it difficult to solve their work tasks satisfactorily. Table 5 provides an overview of the challenges of working from home during lock-down, the main challenge being the changed communication and collaboration with colleagues.

Table 5 (excerpt): spouse also working from home, n = 131 (9.9%). a Tools are, e.g., hardware, software and other office supplies necessary to solve the task; b the possibility to solve work-related tasks without interruptions.

Results from a paired samples t-test showed that the respondents rated all but one of the 15 items addressing work commitment, work and family life, work satisfaction and work ability significantly lower during COVID-19 (means ranging from 2.7 to 8.0) than before the outbreak (means ranging from 2.4 to 6.8) (all p values < 0.005).
The largest branches of industry represented in the survey were teaching, child and youth care and health services (see Table 1). To explore whether there were differences between these branches, the employees' perceived stress, mental well-being, (during COVID-19) job satisfaction and (during COVID-19) work ability were compared. We grouped the variable 'branches of industry', which initially consisted of 21 categories, into four categories: teaching, child and youth care, health services and other branches. The category other branches was composed of the remaining branches in the survey, e.g., employees in social agencies, production industry, building industry, long-term care sector, financial sector and within administration. Almost half of the respondents in this group (n = 268) were employees in the private sector, compared to the other three categories, which were primarily employed in the public sector. The results from the ANOVA tests showed that employees in health services assessed their mental well-being significantly higher than employees in the category other branches (but not teaching or child and youth care) (F(3, 1324) = 5.32, p < 0.05). Employees in health services also assessed their work ability significantly higher than employees in child and youth care, teaching and in other branches (F(3, 439) = 10.9, p < 0.05). The differences between the four groups of employees in terms of perceived stress and job satisfaction were not significant (p values > 0.05).

Family and Social Life

A total of 69.2% of the respondents experienced that the restrictions, to a large or very large extent, had negatively affected their social life. When asked about the impact of the COVID-19 restrictions on their family life, 42.9% answered that the restrictions to a large or very large extent had affected their family life. The free text responses revealed that, as opposed to working life, more people mentioned the positive effects of the restrictions on family life, such as spending more time with their family and enjoying that in a different way than usual. Table 6 shows the results of the logistic regression analysis of being an employee with children under the age of 18 or not. The results displayed in Table 6 suggested that five predictors were significantly associated with whether the employees had children or not: perceived stress, work ability, age and education. Employees without children under the age of 18 tended to have lower levels of perceived stress and to be older (OR < 1.00). They were also more likely to have a better work ability and a university degree of minimum 3 years of length (OR > 1.00).

Gender Differences

As mentioned above, most of the participants in this survey were women (n = 1025, 77.2%). In the logistic regression analyses, we found no gender differences. Overall, however, more women than men reported that the COVID-19 restrictions negatively affected their mental well-being, family life, social life and work (see Table 7). Furthermore, more women than men had worries because of COVID-19 (see Table 8).
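The branch comparisons reported above rest on one-way ANOVAs across the four grouped categories. A minimal sketch of such a test is shown below; the group vectors are simulated placeholders rather than the survey responses, and the post-hoc pairwise comparisons needed to say which specific groups differ are not shown.

```python
# Sketch of a one-way ANOVA comparing work ability across the four grouped
# branches (teaching, child and youth care, health services, other).
# Group data are simulated; post-hoc tests are omitted.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(2)
teaching        = rng.normal(6.5, 1.5, 300)
child_youth     = rng.normal(6.6, 1.5, 250)
health_services = rng.normal(7.4, 1.4, 280)
other_branches  = rng.normal(6.8, 1.5, 400)

f_stat, p_value = f_oneway(teaching, child_youth, health_services, other_branches)
print(f"F = {f_stat:.2f}, p = {p_value:.3g}")
```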
Overall, the employees mental well-being (according to the Warwick-Edinburgh Mental Well-Being Scale) was on an average level, with a mean score of 50.7. A score in the range 41-44 has been found to correspond with possible depression [30], so the respondents mean score on the WEMWBS is well above the level of depression. The results also showed that 21.8% of the respondents scored below 44, meaning that quite a large proportion of employees reported a mental well-being corresponding with depression in comparison with, e.g., the prevalence of depression in Denmark, which is around 3-4% [35]. It is worth noting, though, that the WEMWBS is not a screening tool for depression and thus not a reliable measure of prevalence. In our study, perceived stress was associated with poor mental health, which makes sense considering the interrelatedness of these constructs: high levels of stress typically have an adverse effect on the overall mental well-being and vice versa. A university degree also predicted poor mental well-being in our study, which is similar to results from a recent Spanish study that found a relationship between higher education and higher levels of depression [14]. In other studies, higher socioeconomic status has been associated with better mental well-being and health in general [36,37]. We propose two ways of interpreting this result. First, Faroese employees with a university degree are more likely to have demanding jobs, such as leading workplaces and/or teams. This task was especially challenging during lock-down and thus might have affected these employees mental well-being negatively. Second, the result can be interpreted in light of selection bias, because of the homogeneity of the sample. As Table 1 illustrates, the sample consisted mainly of well-educated women in their mid-thirties, which can influence the generalizability of the results. Job satisfaction and better work ability also predicted a better mental well-being. Health care professionals rated their experience of working environment during lockdown higher compared to other professionals. In fact, employees in health care had a significantly better mental well-being than employees in the category other branches. Employees in health care also assessed their work ability significantly higher than employees in teaching, child/youth care and in other branches. These results are inconsistent with previous research that found a high prevalence of stress, anxiety, depression and other mental health difficulties among especially health care workers [9][10][11]. Our results could be explained by the nature of the changes caused by the restrictions for the different professions. Although health care professionals experienced changes in the organization of their work (e.g., working in smaller teams, longer shifts), they still maintained their core tasks, whereas teachers and professionals within teaching and child/youth care had to solve their core tasks in a different way than before, e.g., by teaching online or taking care of small children without too close physical contact. Besides suddenly finding new ways to solve the core task, the relationship between teacher and student/child, which is also a fundamental part of the core task, is compromised. The low rating of work ability could be understood in the light of these changes. 
Within the health care system, all non-acute treatments such as operations and outpatient treatments were postponed, and the number of COVID-19 related admissions turned out to be much lower than anticipated, which overall lessened the workload for the employees in the health care system. In fact, during the first wave, only eight patients were admitted to hospital, and no intensive care unit admissions or fatalities occurred [38]. The world-wide public support and celebration of front-line workers, especially nurses, doctors and first responders, is also likely to affect the mental well-being of health care workers. Appropriate acknowledgement can foster resilience and thus prevent mental health difficulties [39]. Although the mental health among Faroese employees was on an average level during lock-down, the respondents working life seemed to be worse. Two thirds of the respondents reported that the COVID-19 restrictions greatly affected their working life, which was visible in the significantly lower ratings of work commitment, work and family life, work satisfaction and work ability compared to how they usually feel about their work. These findings are consistent with previous research that found a high prevalence of work exhaustion and burnout among employees [12]. The dissatisfaction with the working environment during COVID-19 could be partly explained by the changed contact to colleagues, which employees found to be most challenging in remote working. Changed contact or no contact at all between colleagues affects the social capital negatively, compromising the possibility of forming and maintaining close workplace relations [6], and could in this study be linked to the significantly lower ratings of work commitment, work and family life, work satisfaction and work ability compared to how they usually feel about their work. Employees experienced more positive consequences of the COVID-19 restrictions in their family life compared to their working life. Lock-down of schools and workplaces meant more time at home with family, and some employees seemed to make the most of that time by doing activities, that are more driven by pleasure than by duty and activities that have a positive effect on mental well-being. The most widely used coping mechanisms among the Faroese employees were, e.g., maintaining a normal everyday life, paying attention to the news, maintaining regular contact with family and friends; mechanisms that are in line with advice from WHO on how to cope with stress during COVID-19 [40]. The respondents more often used healthy coping mechanisms instead of less healthy coping mechanisms such as using smoking or alcohol to deal with the situation. These activities correspond with programs promoting mental health such as the 'ABC for mental health', where the three components of positive mental health are being active, feeling connected to other people and doing something meaningful [41]. The positive consequences or positive coping mechanisms might have served as protective factors, preventing mental well-being from deteriorating, explaining parts of the main result which was that the respondents reported an average level of mental health. Although family can be considered a protective factor to prevent mental health from deteriorating, our study also suggested that employees with children under the age of 18 tended to experience more stress and a lower work ability during COVID-19. 
This result is somewhat consistent with the study of Pesce and Sanna who found that having young children at home was significantly associated with mood-worsening during lock-down [19]. Balancing work and family simultaneously with home-schooling and remote working was difficult and affected the ability to perform the work-related tasks. We found that more women than men had worries because of COVID-19, which is consistent with some of the previous research [19,21,42]. The level of worrying might be understood as a reflection of the fact that the lock-down period put an extra strain on especially women. The International Labour Organization estimates that women, under normal circumstances, perform a daily average of 4 h and 25 min of unpaid care work against 1 h and 23 min for men [43]. The changes caused by COVID-19, increasing the daily time spent in unpaid work for women, is likely to be one factor contributing to the extra worrying for women in this survey. As mentioned in the introduction, the Faroe Islands have managed to quickly gain control of three major peaks in number of infected cases since March 2020, which can be explained in several different ways. One of the measures used has been regular information from the authorities using press conferences and updated information on the official website, bringing forth an awareness among the Faroese people to change their behavior without using laws and legislation. The awareness and perception of risk has most likely contributed to behavior that minimized the spread of the virus [44]. Risk perception has in previous research been associated with the adoption of preventative health behaviors such as social distancing [45]. Another regulation of behavior comes from social control, i.e., how the surrounding society regulates our behavior [46]. In small, close knit societies, this mechanism tends to be more active, and might serve as an extra regulation to ensure that people are keeping the COVID-19 recommendations from the authorities, partly in fear of possible social consequences. A concern among the employees in this survey was the fear of being the disease carrier in the workplace and possibly infecting vulnerable people, e.g., in nursing homes. The fear of being stigmatized as a disease carrier or as infected has from other infectious diseases proven to be an important issue with implications for mental health, and the fear of stigmatization is thus an incentive to behavioral change [47,48]. One major strength of this study is that it includes employees from a broad range of professions, and not only health care professionals, as most of the previous research in the psychological effects of COVID-19 has focused on. It thus broadens the perspective concerning the effect of COVID-19 on employees. This study also has some limitations that should be taken into consideration. First, the cross-sectional research design prevents us from including the effect of time and, e.g., drawing conclusions on causal relationships. Second, this study was an online study, shared primarily through social media. This may have caused selection bias, attracting those who are active on social media and with a much higher representation of women, especially well-educated women within health care, teaching and child/youth care. This bias could affect the results, seeing that higher socio-economic status, including higher income and educational attainment, previously has been associated with preventative health behaviors [37]. 
The results may, therefore, not be representative for the entire population, and thus limit the generalizability. Conclusions In conclusion, the mental well-being of most Faroese employees was on an average level during lock-down in April and May 2020. This result may be partly linked to the effective elimination of COVID-19 in the Faroe Islands, because it shortened the lock-down period compared to other countries. Although the average mental well-being was above clinical range, a proportion of 21.8% reported a mental well-being that corresponds with depression, which is a large part compared to prevalence studies. High levels of stress predicted a poor mental well-being while high levels of job satisfaction and work ability predicted a better mental well-being. The respondents assessed their working environment (work commitment, work and family life, work satisfaction and work ability) significantly worse during the COVID-19 outbreak than before, and employees with children tended to experience more stress and to assess their work ability during COVID-19 significantly lower than employees without children. This was more evident among women who worried more than men about the consequences of COVID-19 and felt more negatively affected by COVID-19 than men. The influence of the lock-down on mental health has not received much attention in the public and the Faroese authorities have primarily focused on measures to contain the spread of the virus and protect the susceptible citizens, and much less on how to stay mentally healthy during the pandemic. The results from our survey highlight the importance of addressing mental health and working environment in a pandemic such as COVID-19 which may have far-reaching implications on people s daily life. Some employees highlighted the positive effects of lock-down, e.g., spending more time with family, while others reported negative effects during lock-down. Working parents might be especially receptive of the negative effects because of the shift in domestic care obligations and the increased burden of unpaid work during COVID-19. Furthermore, short-term effects of adverse working environment, stress and worrying can have long-term implications for individual mental health and family functioning.
Leveraging genome characteristics to improve gene discovery for putamen subcortical brain structure Discovering genetic variants associated with human brain structures is an on-going effort. The ENIGMA consortium conducted genome-wide association studies (GWAS) with standard multi-study analytical methodology and identified several significant single nucleotide polymorphisms (SNPs). Here we employ a novel analytical approach that incorporates functional genome annotations (e.g., exon or 5′UTR), total linkage disequilibrium (LD) scores and heterozygosity to construct enrichment scores for improved identification of relevant SNPs. The method provides increased power to detect associated SNPs by estimating stratum-specific false discovery rate (FDR), where strata are classified according to enrichment scores. Applying this approach to the GWAS summary statistics of putamen volume in the ENIGMA cohort, a total of 15 independent significant SNPs were identified (conditional FDR < 0.05). In contrast, 4 SNPs were found based on standard GWAS analysis (P < 5 × 10−8). These 11 novel loci include GATAD2B, ASCC3, DSCAML1, and HELZ, which are previously implicated in various neural related phenotypes. The current findings demonstrate the boost in power with the annotation-informed FDR method, and provide insight into the genetic architecture of the putamen. pharmacological target for schizophrenia or Parkinson's disease treatment 11 . Alterations in putamen activity or volumes have been implicated in psychiatric and substance use disorders [12][13][14][15] . However, how genes play a part in these neuroanatomical and functional characteristics remain mainly unknown. Our analysis to identify novel common genetic variants influencing the variation of putamen volume in the human population could be utilized as resources to examine genetic contribution to brain structure, function and disorders. Large GWAS have successfully identified thousands of single nucleotide polymorphisms (SNPs) associated with hundreds of human complex traits 16,17 , thus improving our understanding of the genetic basis of many human diseases and traits. The emerging consensus from GWAS suggests that complex traits and diseases exhibit a polygenic architecture composed of many individually small effects. A polygenic architecture poses challenges for GWAS, as a massive number of statistical tests reduce power considerably for detecting small signals. As widely recognized, SNPs that exceed the GWAS significance threshold explain only a small fraction of the heritability 18,19 . To mitigate this limitation of the standard GWAS approach, we employ a framework that concurrently uses genic annotations, heterozygosity, total linkage-disequilibrium (LD) scores, and summary statistics from GWAS 20 . We have shown that ranking SNPs according to these genome characteristics yields a larger number of loci surpassing a given threshold than ranking SNPs according to their nominal P values alone 21 . Ranking SNPs by incorporating genomic annotations and other sources of "enrichment" along with the P values obtained from existing large GWAS allows us to accelerate discovery of genetic variants associated with the phenotype of interest in a cost-efficient manner. This approach can be useful especially when phenotypes are very difficult to attain for a sufficiently large number of subjects, as is the case with brain imaging phenotypes. 
The rationale behind our framework is that polymorphism variations in and around genes have been shown to harbor more genetic effects than intergenic regions [22][23][24][25] . This observation suggests that some categories of SNPs such as regulatory and coding elements of protein coding genes are more enriched for genetic effects on a phenotype than other SNPs 20,26,27 . We use our previously developed LD-weighted genic annotation method that takes into account the LD structure to select SNPs that are related to various functional categories of the genome such as exon, intron, and 3′UTR 20 . In addition to the genic annotation, we use other information in the genome to improve gene discovery, including heterozygosity (H, where H = 2 f (1 − f); f is allele frequency for either of the two SNP alleles) and total LD scores of individual SNPs, because variants that are of high frequency and in regions of extensive LD are more detectable in GWAS 28,29 . We integrate these various sources of enrichment information to construct a relative enrichment score (RES) for each SNP, which was used in our previous study of Covariate-Modulated Mixture Model (CM3) 21 . RES is defined as the estimated enrichment (Xβ) obtained from a logistic regression model for the thresholded GWAS summary statistics using LD-weighted genic annotation categories and total LD scores with heterozygosity weightings as explanatory variables. We then re-rank the SNPs based on their RES (instead of GWAS P values), and categorize the SNPs into several strata. For each stratum, the stratum-specific information can be used to calculate a stratified True Discovery Rate (TDR). We hypothesize that by incorporating prior enrichment information of the genome in the analysis of genotype-phenotype mapping, we can improve power to discover common genetic variants associated with the putamen volume. Results The Q-Q plot of putamen stratified by relative enrichment scores. The stratified Q-Q plot shows different enrichment levels across RES strata, which deviate further away from the null line as RES increases (Fig. 1a). An earlier or greater departure from the null line (leftward shift) suggests a larger proportion of true associations for a given nominal P value. SNPs with higher RES (Xβ , see Methods), calculated by a logistic regression model incorporating annotation categories, total LD and heterozygosity, are more likely to be associated with putamen than those with lower RES. True discovery rate (TDR). Variation in enrichment across RES strata is associated with corresponding variation in TDR for a given P value threshold (Fig. 1b). The enrichment can be directly interpreted in terms of TDR formed by estimating 1 − P/Q for each nominal P value from the stratified Q-Q plots (see Methods). This relationship is shown for putamen, the corresponding estimated TDR increases as RES stratum increases. The top RES stratum contained SNPs reaching high TDR earlier than those in other strata, indicating its greater power to identify SNPs associated with putamen. Predicted stratified Q-Q plot and TDR. In addition to model-free Q-Q and TDR plots generated by empirical distributions, we applied a model-based method to fit Q-Q and TDR curves in each stratum. The fitted Q-Q plot (dotted curves of Fig. 1a) is generated by using Weibull-chi-square mixture distribution (see Methods) and then the fitted TDR plot (dotted curves of Fig. 1b) is estimated by 1 − P/Q for each nominal P value as described above. 
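As an aside on the model-free estimates just described, the stratified TDR can be computed directly from the GWAS p-values and the RES strata: within a stratum, the empirical fraction Q of SNPs with p-values at or below a nominal threshold p gives a conservative FDR of roughly p/Q, hence TDR of roughly 1 − p/Q. The sketch below uses simulated summary statistics (constructed so that higher RES carries more non-null effects); nothing in it is taken from the ENIGMA data, and the stratum boundaries are arbitrary tertiles.

```python
# Sketch of the model-free stratified TDR estimate: TDR ~ 1 - p/Q, where Q is
# the empirical CDF of p-values within an RES stratum at the nominal level p.
# All inputs are simulated placeholders, not the putamen GWAS data.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
n_snps = 200000
res = rng.normal(size=n_snps)                          # relative enrichment score (X*beta)
prob_nonnull = 1.0 / (1.0 + np.exp(-(res - 2.0)))      # enrichment increases with RES
effect = np.where(rng.random(n_snps) < prob_nonnull, rng.normal(0, 2, n_snps), 0.0)
z = rng.normal(size=n_snps) + effect
pvals = 2 * norm.sf(np.abs(z))

stratum = np.digitize(res, np.quantile(res, [1 / 3, 2 / 3]))  # 0 = low, 2 = top RES stratum

def tdr(p_threshold, stratum_pvals):
    q = np.mean(stratum_pvals <= p_threshold)          # empirical CDF at the threshold
    return np.nan if q == 0 else 1.0 - p_threshold / q

for s in range(3):
    print(f"RES stratum {s}: TDR at p = 1e-4 is about {tdr(1e-4, pvals[stratum == s]):.3f}")
```

The higher strata reach a given TDR at larger nominal p-values, which is the basis of the conditional FDR lookup table described below.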
Specifically, the model-based method did not perform well in Stratum 1, but fit well in Strata 2 and 3, which have a larger proportion of trait-associated SNPs. In addition, curves in the high-TDR (i.e., low-FDR) region were well fit by the parametric model, which might facilitate obtaining good predicted values of TDR for detecting significant SNPs. Lookup table. Given nominal P values, the lookup tables were constructed by interpolated FDR conditional on RES strata (Supplementary Fig. S1b) and by interpolated FDR for all strata combined (Supplementary Fig. S1a). In the lookup table, a gradual decrease of FDR from the bottom-left to the top-right corner suggests improved enrichment by stratification of RES (shown as a gradual increase of −log10(FDR) in the figure), and smooth gradients indicate good interpolation for our FDR estimate. Given an FDR threshold of 0.05 (i.e., ~1.3 for −log10(FDR)), the corresponding nominal P value is around 10−7 for lower levels of RES, whereas the nominal P value relaxes to around 10−3 for higher levels of RES. P value and conditional FDR results. SNPs associated with putamen were identified by the GWAS P value threshold and by FDR conditional on RES. To ensure that significant loci are independent, we removed SNPs with LD r2 > 0.2 and retained the SNP with the lowest FDR in each LD block. For a GWAS threshold of P value < 5 × 10−8, a total of 4 independent SNPs located in different loci were found (Table 1). Given a threshold of conditional FDR < 0.05, we identified 15 significant independent SNPs (Table 1). With the same threshold for unconditional FDR, 8 SNPs were identified. Although SNPs detected by P value are not entirely … To visualize SNPs associated with putamen, we constructed a Manhattan plot showing the FDR stratified by RES. The 15 independent loci identified with a significance threshold of conditional FDR < 0.05 (Table 1) were plotted in the Manhattan plot (Fig. 2), where gene names for those loci are also shown, except for intergenic SNPs. Interestingly, several significant loci with stronger signals were distributed on chromosomes 14 and 18.

Figure 1. Stratified Q-Q plot of putamen volume. Stratified Q-Q and TDR plots overlaid with predicted lines show enrichment conditional on relative enrichment score (RES). (a) A greater degree of deflection of the Q-Q curves from the expected null line accompanies higher RES strata, reflecting that SNPs in higher RES strata are more likely to be associated with putamen than those in lower RES strata. The dotted curves show predicted Q-Q curves from the mixture distribution. In each RES stratum, the Q-Q curve is fitted using a mixture of Weibull and chi-square distributions. (b) TDR in each stratum is obtained from the corresponding Q-Q curve. The pattern of curves for different levels of RES is similar to the stratified Q-Q plot. It also shows that, given a nominal P value, RES improves TDR estimates, indicating that stratification by RES enhances power to detect signals associated with putamen. The predicted TDR curve (dotted line) in each stratum is generated from the corresponding predicted Q-Q curve.

Method comparison: fgwas. Applying fgwas to our putamen GWAS summary statistics, only one SNP was prominent at posterior probability > 0.5, two SNPs (in the same LD block) at posterior probability > 0.4, and nine SNPs (in four LD blocks) at posterior probability > 0.1.
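The rule used above to declare independent significant loci (retain the lowest-FDR SNP in each LD block and drop SNPs at r2 > 0.2 with it) can be sketched as a greedy pruning step. The pairwise LD matrix is assumed to come from a reference panel, and all names are illustrative.

```python
import numpy as np

def independent_significant_loci(fdr, r2, r2_max=0.2, fdr_max=0.05):
    """Greedy LD pruning of candidate SNPs (illustrative).

    fdr : conditional FDR value for each SNP
    r2  : pairwise LD matrix (r^2) for the same SNPs, assumed precomputed
          from a reference panel
    Keeps the SNP with the lowest FDR in each LD block, drops SNPs in LD
    (r^2 > r2_max) with an already-kept SNP, and returns the indices of
    independent SNPs passing the FDR threshold.
    """
    fdr, r2 = np.asarray(fdr, dtype=float), np.asarray(r2, dtype=float)
    dropped = np.zeros(fdr.size, dtype=bool)
    kept = []
    for i in np.argsort(fdr):                  # most significant first
        if dropped[i]:
            continue
        kept.append(i)
        dropped |= r2[i] > r2_max              # prune LD partners of the kept SNP
        dropped[i] = False                     # the kept SNP itself stays
    return [i for i in kept if fdr[i] < fdr_max]
```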
The posterior probabilities for 15 significant SNPs were shown in Supplementary Table S1 and none of SNPs with posterior probability > 0.5. The SNP rs8017172 was most significant at levels of P value, conditional FDR and posterior probability. Discussion By applying a new method capitalizing on genic annotations, heterozygosity and total LD, we were able to model the cumulative probability distributions of SNPs assigned to different strata and detected 15 significant loci including 4 SNPs reported in the original GWAS paper. Using enrichment information increased the power to find more independent significant loci. Of the 15 loci influencing putamen volume, 4 have been reported in the original GWAS paper 3 . Among 11 novel loci, we identified an intronic locus (rs597583, P = 3.59 × 10 −7 , conditional FDR = 0.0174) within DSCAML1 (Down syndrome cell adhesion molecule like 1), which is expressed in the brain and produces cell adhesion molecule that is involved in formation and maintenance of neural networks and neurite arborization 30,31 . The chromosomal locus of this gene on 11q23 has been suggested as a candidate for neuronal disorders 31 , because 11q23 contains a number of genes and gene families expressed in the nervous system and harbors candidate regions for several diseases with neurological features 32 . Its paralog, DSCAM, a conserved gene has been found to be involved in learning-related synapse formation in aplysia 33 . We also detected an intronic locus within GATAD2B (GATA zinc finger domain containing 2B), which is a protein coding gene and may play a role in synapse development and normal cognitive performance 34 . Diseases associated with GATAD2B include mental retardation and severe intellectual disability with distinct facial features 34,35 . Two other associated genes, ASCC3 (activating signal cointegrator 1 complex subunit 3) and HELZ (helicase with zinc finger), encode proteins that belong to the helicase family for unwinding double-strands, which may be involved in DNA repair or RNA metabolism in multiple tissues with ubiquitous expression for ASCC3 and predominant expression in thymus and brain for HELZ 36,37 . In addition, DLG2 identified previously 3 and the novel locus, ASCC3, were reported to be suggestively associated with neurodegenerative diseases, Parkinson's disease 38 and multiple system atrophy 39 , respectively. The dopamine deficiency within the basal ganglia leads to Parkinsonian motor symptoms 40 and putamen is part of the basal ganglia. In patients with Parkinson subtype of multiple system atrophy, the volume of putamen was observed to be atrophic, and MRI signals in the putamen were shown to be marginally hyperintense 41 . This evidence suggested that DLG2 and ASCC3 might have pleiotropic effects on putamen and neurodegenerative diseases; on the other hand, they might influence neurodegenerative diseases through alteration of putamen. Please see Table 1 for the full list of the loci discovered. We have recently developed the FDR approach for improved gene discovery in complex genetic phenotypes [42][43][44][45] . Applying this approach to brain structure phenotypes, we increased discovery of loci jointly influencing schizophrenia and brain structure volumes 46 . The current findings suggest that re-prioritizing SNPs according to their characteristics is advantage for gene discovery in the context of FDR. It has increasingly become evident that certain genomic regions harbor more genetic effects on a given phenotype than other genomic regions 21 . 
The characteristic of heterozygosity for each SNP represents power to detect genetic effects in the sample population of association analysis. Heterozygosity is defined as 2 f(1 − f) and f is the SNP minor allele frequency, which is the genotype variance in the regression model with a higher value for common variants. It is known that allele frequency plays an important role in determining power of SNP associations 47 (Supplementary Fig. S2), and square root of heterozygosity (H) is directly proportional to GWAS summary statistics (z values) for each SNP. We also incorporate total LD scores from the reference genomes of the same genetic ancestry (i.e. European) when calculating the relative enrichment score (RES), because if a SNP is in a large LD block, it is more likely to be linked with one or more causal variants. All of these characteristics have predictive power for genetic effects on phenotypes thus are used to construct RES for each SNP. Their enrichment features were explored and visualized in Q-Q plots (Supplementary Fig. S2). As described in Methods, RES is defined as the predicted response Xβ from a logistic regression with these SNP characteristics as predictors, representing a composite score of estimated enrichment. All SNPs are re-ranked and stratified by their RES. The existing methods, such as those based on stratified and conditional FDR have been shown to be superior to the traditional GWAS because they incorporate auxiliary information for stratification 47,48 . Indeed, there is often natural stratification present in the data such as stratification by allele frequency 47 or genome annotation. Therefore, treating SNPs by strata with the incorporation of prior information, we increased the power to detect trait-associated SNPs. In comparison with fgwas 49 which incorporates annotation information, we detected additional seven SNPs that were not identified by P value or unconditional FDR, and fgwas detected one SNP with posterior probability > 0.5 which was already significant at genome-wide P value level. It is of note that our approach is in line with the stratified FDR method 47 (computation of FDR by strata), however, we make our FDR values continuous by computing FDR estimates on a grid and interpolating these estimates 48 . Presumably, the continuous estimates will more realistically reflect the FDR estimates for SNPs that fall in between the stratum Q-Q curves. We also build on the modeling framework initially presented in the CM3 in which the RES was formulated 21 . There are some methodological considerations in the approach. First, although we found more significant loci than those identified by the traditional GWAS approach, all the novel loci collectively only explained a small fraction of heritability, suggesting that most of the trait-associated loci are still uncovered. Second, some parameters in our model cannot be precisely determined, for example, the total number of the strata or the percentile coverage of each stratum. This caveat is partly due to the fact that the true underlying genetic architecture of complex traits is seldom known in advance (e.g., the level of polygenicity, or annotations and allele frequencies of causal variants), and elucidating this issue is the subject of on-going research. Changes in these parameters may affect the significance status for SNPs close to the threshold, but most of the highly significant SNPs remained robust. Third, our method uses summary statistics of GWAS and hence inherits the limitations of GWAS. 
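To make the heterozygosity weighting tangible, the sketch below computes H = 2f(1 − f) and the standard approximation in which the expected association z statistic grows with the square root of n·H for a fixed per-allele effect. Constants, covariates and LD are ignored, and the helper names are ours rather than from the original analysis.

```python
import numpy as np

def heterozygosity(maf):
    """H = 2 f (1 - f): the variance of an additively coded (0/1/2)
    genotype under Hardy-Weinberg equilibrium."""
    maf = np.asarray(maf, dtype=float)
    return 2.0 * maf * (1.0 - maf)

def expected_z(beta, n, maf, resid_sd=1.0):
    """Rough expected association z statistic for a per-allele effect
    `beta` in a sample of size n: proportional to sqrt(n * H), so rarer
    variants (smaller H) need larger effects or samples to reach the
    same significance. Covariate adjustment and LD are ignored here."""
    return beta * np.sqrt(n * heterozygosity(maf)) / resid_sd
```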
For example, multi-loci association analyses that take into account effects of other SNPs may give more unbiased estimates of genetic effects for each SNP. Along this line, association findings from GWAS are susceptible to the presence of population structure. Although principal components may represent broad differences across the sample, other polygenic mixed linear models including genetic relationship matrices have been proposed to be less susceptible to population structures and to increase the precision of genetic effect estimation 50,51 . Fourth, our approach is conservative formulation for FDR estimation in the enriched strata. We assume π 0 = 1 in the model, but the top enriched stratum has lower π 0 (i.e., lower proportion of null SNPs) so that the FDR as P/Q can be overestimated. However, other aspects in the analysis such as correlations among SNPs might over-and under-estimate FDR. We tried to minimize this issue by randomly pruning SNPs with a stringent threshold of r 2 = 0.2. We also randomly pruned the SNPs with a less stringent threshold (r 2 = 0.8). The analysis identified the same 15 loci as those from the analysis of using r 2 = 0.2. Fifth, there are other methods that use prior information. Many of these methods are Bayesian association study methods and calculate Bayes factors or posterior probability of association 49,[52][53][54] . Some of the approaches are scalable to a very large number of functional annotations or characteristics of SNPs, and is relatively more complicated to apply in practice. Other methods propose various ways to include SNP characteristics such as multi-thresholding by varying the significant threshold at each SNP 29,55 or defining weightings to SNPs depending on prior information via multivariate regression 56 . Our current approach is built on our prior work 20,48,57,58 , which is based on a Bayesian two-groups mixture model for Fdr control by Efron 59 . Our straightforward approach complementary to other methods can be a useful tool for gene discovery. GWAS is an efficient tool to survey through genome-wide millions of loci for identifying any trait-associated SNPs in a hypothesis-free manner with all SNPs treated identically. As previously shown, SNPs from GWAS with sub-threshold P values account for a considerable proportion of the variance in independent samples, suggesting that these sub-threshold SNPs are enriched for genetic effects 60 . Our method is built on the GWAS approach and utilizes GWAS summary statistics in the framework of FDR as a screening tool to uncover subthreshold and high-priority candidates by incorporating genic annotations. The resulting FDR estimates may have utility as resources or databases for hypothesis generation, and could aid in more robust and meaningful candidate gene selection (e.g., testing causal genetic effects in biological experiments). Methods Participant samples. We obtained putamen GWAS results in the form of summary statistics from the ENIGMA consortium. The putamen GWAS summary statistic data consisted of 12,596 participants derived from 26 substudies with all European ancestry, which is a subset of GWAS discovery sample (N = 13,171) published in 2015 3 . Putamen GWAS was used for its better power than GWAS of other subcortical structures. All participants in substudies gave written informed consent and sites involved obtained approval from local research ethics committees or Institutional Review Boards 3 . Putamen structural measure. 
The subcortical putamen measure was obtained from structural MRI data collected, processed and examined for quality at participating sites, following a standardized protocol procedure (http://enigma.ini.usc.edu/protocols/imaging-protocols/) to harmonize the analysis across sites 3 . In addition, the measure of head size (intracranial volume, ICV) were calculated and corrected for the subcortical measures in the association analyses 3 . Genotyping and imputation. Samples were genotyped using commercially available platforms and assessed for genetic homogeneity using multi-dimensional scaling (MDS) analysis to exclude ancestry outliers in each substudy 3 . SNPs with low minor allele frequency (<0.01), poor genotype call rate (<95%), and deviations from Hardy-Weinberg equilibrium (P < 1 × 10 −6 ) were filtered 3 . The imputation and quality control procedures were followed by the protocol (http://enigma.ini.usc.edu/protocols/genetics-protocols/) using MaCH 61 for haplotype phasing and minimac 62 for imputation 3 . Poorly imputed SNPs (with r 2 < 0.5) and SNPs with low minor allele count (<10) were removed. The total number of SNPs included in the analysis for each substudy ranged between 6.9-10.5 million. Genome-wide association analysis. The association analysis between putamen measure and each SNP (additive dosage value) was based on a multiple linear regression model controlling for age, square of age, sex, 4 MDS components, ICV, diagnosis (when applicable) and centers/scanners (for substudies with data collected from several centers/scanners) 3 . The protocols used for testing association can be found online (http://enigma.ini. usc.edu/protocols/genetics-protocols/) with mach2qtl 61 for substudies of unrelated subjects and merlin-offline 63 for family-based designs 3 . Meta-analysis of genome-wide association results from substudies. The GWAS results from each substudy were corrected for genomic inflation 3 . The meta-analysis was performed using a fixed-effect, inverse-variance model implemented in the software package METAL 64 . SNPs for the meta-analysis were reduced into ~2.5 million SNPs based on pre-calculated LD-weighted annotation scores for individual SNPs (see the section below). The correlation structure of SNPs for calculating annotation scores was determined by an LD matrix of 2,549,449 autosomal SNPs generated from the European reference sample in the 1000 Genomes Project phase1 v3 within 1,000,000 base pairs (1 Mb) 20 . LD-weighted genic annotation. Each SNP analyzed in our study was annotated with LD-weighted genic annotation scores. The score was calculated based on the European reference sample provided by the November 2012 release of the Phase I 1000 Genomes Project (1KGP). Specially, each SNP in the 1KGP reference panel was initially assigned to a single mutually exclusive genic annotation category based on its genomic position (the UCSC gene database, hg19). Eight genic annotation categories were used: exon, intron, 5′ untranslated region (5′UTR), 3′UTR, 1 and 10 kilo-base pairs upstream of the gene transcription start positions, and 1 and 10 kilo-base pairs downstream of gene transcription end positions 65 . Pairwise LD scores (r 2 ) between SNPs were calculated. For each SNP, a continuous, non-exclusive LD-weighted category score was assigned as the LD weighted sum of the positional category scores for variants tagged in each of the eight categories mentioned above. 
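A simplified sketch of the LD-weighted annotation scoring described above: each SNP's continuous score for a category is taken as the r2-weighted sum over variants it tags that are positionally assigned to that category. The 1 Mb windowing, r2 handling and other details follow Schork et al. and are not reproduced here; this is an approximation of that procedure, not the published implementation.

```python
import numpy as np

def ld_weighted_annotation(r2, snp_category, categories):
    """LD-weighted genic annotation scores (simplified sketch).

    r2           : pairwise LD matrix (r^2) for SNPs in the local window
    snp_category : positional category label assigned to each SNP
    categories   : the eight genic categories (exon, intron, 5'UTR, ...)
    For each SNP and category, the score is the r^2-weighted count of
    variants tagged by that SNP which fall in the category.
    """
    snp_category = np.asarray(snp_category)
    scores = np.zeros((snp_category.size, len(categories)))
    for k, cat in enumerate(categories):
        indicator = (snp_category == cat).astype(float)
        scores[:, k] = r2 @ indicator      # sum_j r2_ij * 1[SNP j in category cat]
    return scores
```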
By incorporating LD information, the annotation of individual SNPs reflects the weighted annotation in the context of underlying linkage blocks. For detailed information on SNP annotation, score construction and quality control, see Schork et al. 20. Relative enrichment score (RES). Let p denote the P value of a particular SNP from the GWAS summary statistics data. We defined y = 1 if p ≤ pthresh (pthresh = 10−3 in the current study) and y = 0 otherwise, to divide the SNPs into those that are more likely to have a non-null effect and those that are more likely to have null effects. A multiple logistic regression model was fit: logit[Pr(y = 1 | X = x)] = (β1x1 + β2x2 + … + βkxk)H, where the xi, i = 1, …, k, are the nine predictors for a SNP's association with the phenotype. We included the genic annotation scores from the eight categories and total LD scores (TLD), weighted by heterozygosity (H = 2f(1 − f), where f is the SNP minor allele frequency from the 1KGP European reference panel), because they have been shown to associate with strength of association and probability of replication for many complex phenotypes 20. The RES for each SNP is defined as the fitted value, Xβ̂, from the above logistic regression model. We have used this RES approach in a previous paper 21. Before computing the RES, SNPs were randomly pruned at LD r2 < 0.8. Correlated SNPs do not affect β estimation, so we prune SNPs at a liberal threshold. The GWAS summary statistics used to calculate RES should ideally come from a data set independent of the one used for gene discovery, to avoid overfitting problems from fitting the same data set twice (i.e., calculating RES first and estimating conditional FDR second). However, it is often hard to obtain two or three independent GWAS data sets of a given phenotype (the third one for replication analysis). Our prior work and that of others have observed that height is extremely polygenic and that its pattern of SNP associations has several typical features, such as associated signals lying near genes 20,66. Height can therefore be used as a proxy for a generic complex-trait phenotype, and its GWAS summary statistics can be used to locate polygenic loci in the genome when multiple independent GWAS data sets of the phenotype of interest are not available. We previously adopted a similar approach using height GWAS to train the logistic regression for computing SNP enrichment scores 20,67. Stratified Q-Q plots and enrichment. Q-Q plots are standard tools for assessing the degree of similarity between two cumulative distribution functions (CDFs). When the probability distribution of GWAS summary statistic P values is of interest, under the global null hypothesis the theoretical distribution is uniform on the interval [0,1]. If nominal P values are ordered from smallest to largest, so that P(1) < P(2) < … < P(N), the corresponding empirical CDF, denoted by Q, is simply Q(i) = i/N, where N is the number of retained SNPs. Thus, for a given index i, the x-coordinate of the Q-Q curve is Q(i) and the y-coordinate is the nominal P value P(i). Instead of plotting nominal P values against empirical P values, in GWAS it is common practice to plot −log10 nominal P values against the −log10 empirical P values, Q, so as to emphasize the tail probabilities of the theoretical and empirical distributions.
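The RES computation just described can be sketched as follows: threshold the GWAS P values at 10−3 to form the response, weight the nine predictors by heterozygosity, fit an intercept-free logistic regression, and take the fitted linear predictor Xβ̂ as the RES. In the actual analysis the regression would be trained on an independent GWAS such as height; reusing the same summary statistics here is purely for illustration, and the names are ours.

```python
import numpy as np
import statsmodels.api as sm

def relative_enrichment_score(pvals, annot_scores, tld, maf, p_thresh=1e-3):
    """RES sketch: logistic regression of thresholded P values on
    heterozygosity-weighted predictors; the RES is the fitted linear
    predictor X * beta-hat.

    pvals        : nominal GWAS P values (training GWAS)
    annot_scores : (n_snps, 8) LD-weighted genic annotation scores
    tld          : total LD score per SNP
    maf          : minor allele frequency per SNP (reference panel)
    """
    maf = np.asarray(maf, dtype=float)
    H = 2.0 * maf * (1.0 - maf)                              # heterozygosity
    X = np.column_stack([annot_scores, tld]) * H[:, None]    # (x * beta) * H weighting
    y = (np.asarray(pvals, dtype=float) <= p_thresh).astype(int)
    fit = sm.Logit(y, X).fit(disp=0)                         # no intercept, as in the model
    return X @ fit.params                                    # RES = X * beta-hat
```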
Leftward deflections of the observed Q-Q curves from the projected null line reflect increased tail probabilities in the distribution of test statistics and consequently an over-abundance of low P values compared to that expected by chance. We qualitatively refer to this deflection as "enrichment" 20,43 . To assess improved enrichment afforded by genic annotations, heterozygosity and total LD, we used stratified Q-Q plots based on RES. We classified SNPs with the bottom 25-30% RES as the first stratum and the SNPs with the top 1-5% RES as the last stratum. The rest of SNPs are in the second stratum. There is no overlapping set of SNPs between strata. The reason for uneven placement of stratum cut-offs at the two ends of the RES distribution was based on our previous observation that, for the distribution of effects in complex traits, a large proportion of SNPs have negligible effects and a very small proportion of SNPs have non-negligible effects. We then constructed RES stratified Q-Q plots of empirical quantiles of nominal SNP association with putamen for all SNPs, and for subsets of SNPs in each of three strata determined by their RES. Improved enrichment for trait-associated signals is present if the degree of deflection from the expected null line is dependent on the level of the RES. Specifically, the SNPs with higher RES showed a greater degree of deflection from the expected null line. False discovery rate. The 'enrichment' seen in the Q-Q plots (i.e., the leftward deflection from the null line) can be directly interpreted in terms of False Discovery Rate (FDR). To reduce the effect of correlation among SNPs in FDR estimation, SNPs were randomly pruned at LD r 2 < 0.2. For full details of FDR estimation, please see previous papers 48,57 and Supplementary Information. Parametric model. The shape of the empirical distributions depicted in the Q-Q plots resembles the shape of the distribution function of a mixture of Weibull and chi-square distributions. So, for each RES stratum we modeled the Q-Q curve with a function proportional to the distribution function of a Weibull-chi-square mixture to compute stratum-specific predicted FDR. We assumed different scale parameters for the two component distributions. Further, an exploratory analysis showed that a value of 0.5 is a reliable choice for the shape parameter of the Weibull component. Keeping the shape parameter fixed at 0.5, the unknown parameters of the mixture were estimated by maximizing a cost function using unconstrained nonlinear optimization, where the cost function is proportional to the logarithm of the likelihood function of the parameters given the observed SNP distribution. The dotted line in Fig. 1a gives a graphic presentation of the predicted Q-Q curve from the mixture distribution using estimated parameters. The predicted TDR curve in each stratum is generated from the corresponding Q-Q curve and 1-FDR. Lookup table. We used heat maps to illustrate lookup tables to visualize variations of FDR across and within RES strata shown in Supplementary Fig. S1. Unconditional FDR is denoted as FDR obtained from the predicted Q-Q curve of all SNPs estimated by using Weibull-chi-square mixture distributions based on the 10,000-bin empirical quantile. Specifically, for each SNP the unconditional FDR value was obtained by linear interpolation from the predicted FDR values of 10,000 bins and illustrated by an unconditional FDR lookup table (Supplementary Fig. S1a) corresponding to variations of P values. 
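The per-SNP unconditional FDR lookup described above reduces to interpolating model-based FDR values over a fine grid of nominal P values. A minimal sketch of that interpolation step follows; the grid values themselves would come from the fitted Weibull–chi-square mixture, which is not reproduced here, and the function name is illustrative.

```python
import numpy as np

def interpolate_fdr(pvals, grid_p, grid_fdr):
    """Per-SNP unconditional FDR by linear interpolation over a fine grid
    (10,000 bins in the paper). Interpolation is done on the -log10 scale
    so that the tail of the distribution is tracked smoothly.
    """
    x = -np.log10(np.asarray(grid_p, dtype=float))
    y = -np.log10(np.asarray(grid_fdr, dtype=float))
    order = np.argsort(x)                          # np.interp needs increasing x
    neglog_fdr = np.interp(-np.log10(np.asarray(pvals, dtype=float)),
                           x[order], y[order])
    return 10.0 ** (-neglog_fdr)
```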
Conditional FDR values 48 were generated by bilinear interpolation from the predicted FDR values of 10,000 bins across three strata and displayed in a conditional lookup table (Supplementary Fig. S1b) reflecting RES strata against nominal P values. The values of FDR in terms of −log 10 (FDR) are illustrated by gradient colors in the lookup tables with color bars. Smooth gradients indicate good interpolation for FDR estimate of each SNP and colors varied from dark to light show enrichment improved by increasing RES. Manhattan plot. To illustrate the localization of the genetic markers associated with putamen conditional on RES, we constructed a 'RES-stratified Manhattan plot' by plotting all SNPs within an LD block in relation to their chromosomal location. All SNPs without pruning are shown as individual points but only the most significant SNP with respect to conditional FDR in each LD block is illustrated with its gene name in the plot. In each LD block, FDR values of SNPs were ranked in ascending order and SNPs that have high LD (r 2 > 0.2) with top SNPs were then removed. Thus, we retained the most significant SNP associated with putamen in each LD block. The large points and small points represent significant (FDR < 0.05) and non-significant SNPs, respectively. Two colors, red and black, denote signals from conditional and unconditional FDR, respectively. The red gene names denote the loci with FDR < 0.05. Method comparison using fgwas. To compare with other methods incorporating annotation information, we performed an additional analysis using fgwas 49 (https://github.com/joepickrell/fgwas) which incorporates multiple functional annotations to inform GWAS. This method calculates the posterior probability that any given SNP is causal based on an empirical Bayes approach. We included eight annotation categories (exon, intron, 5′UTR, 3′UTR, 1 and 10 kb and 1 and 10 kb downstream) which is identical to RES in our analysis.
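The bilinear interpolation behind the conditional FDR lookup table can be sketched as below, assuming the −log10(FDR) grid over RES strata and P-value bins has already been constructed. The stratum coordinates and grid handling are simplifications rather than the exact implementation, and the names are ours.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def conditional_fdr(pvals, res_position, strata_coords, logp_bins, logfdr_grid):
    """Conditional FDR by bilinear interpolation over the stratified lookup
    table (sketch).

    logfdr_grid   : -log10(FDR) on a (stratum x P-value bin) grid, as in the
                    conditional lookup table (Supplementary Fig. S1b)
    strata_coords : positions of the strata on the RES axis
    logp_bins     : -log10 nominal P value bin centres
    res_position  : each SNP's position on the RES axis
    """
    interp = RegularGridInterpolator(
        (np.asarray(strata_coords, dtype=float), np.asarray(logp_bins, dtype=float)),
        np.asarray(logfdr_grid, dtype=float),
        method="linear", bounds_error=False, fill_value=None)  # extrapolate at edges
    points = np.column_stack([res_position, -np.log10(np.asarray(pvals, dtype=float))])
    return 10.0 ** (-interp(points))
```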
Using intervention mapping to develop a work-related guidance tool for those affected by cancer Background Working-aged individuals diagnosed and treated for cancer require support and assistance to make decisions regarding work. However, healthcare professionals do not consider the work-related needs of patients and employers do not understand the full impact cancer can have upon the employee and their work. We therefore developed a work-related guidance tool for those diagnosed with cancer that enables them to take the lead in stimulating discussion with a range of different healthcare professionals, employers, employment agencies and support services. The tool facilitates discussions through a set of questions individuals can utilise to find solutions and minimise the impact cancer diagnosis, prognosis and treatment may have on their employment, sick leave and return to work outcomes. The objective of the present article is to describe the systematic development and content of the tool using Intervention Mapping Protocol (IMP). Methods The study used the first five steps of the intervention mapping process to guide the development of the tool. A needs assessment identified the ‘gaps’ in information/advice received from healthcare professionals and other stakeholders. The intended outcomes and performance objectives for the tool were then identified followed by theory-based methods and an implementation plan. A draft of the tool was developed and subjected to a two-stage Delphi process with various stakeholders. The final tool was piloted with 38 individuals at various stages of the cancer journey. Results The tool was designed to be a self-led tool that can be used by any person with a cancer diagnosis and working for most types of employers. The pilot study indicated that the tool was relevant and much needed. Conclusions Intervention Mapping is a valuable protocol for designing complex guidance tools. The process and design of this particular tool can lend itself to other situations both occupational and more health-care based. Background It has been estimated that around 1 in 3 people (33%) will develop cancer at some point in their lifetime [1]. The most commonly diagnosed cancers are prostate, lung, colorectal and stomach for men; and breast, cervical, colorectal, and lung for women [2]. Together, these cancers account for 40% of all cancers worldwide [2]. In the UK, around 90,000 people of working-age are diagnosed with cancer annually [3], a time when career and work-related issues play an important role in their lives. However, advances in early detection and treatment of cancers have led to improved prognosis with five-year survival rates for all cancer combined reaching 50% [1]. Therefore, a high proportion of those treated successfully are able to resume their lives, including work [4] and between 41-84% of those treated for cancer are reported to return to work following treatment [5]. Evidence suggests, however, that many of these individuals can experience new physical limitations as a result of their illness and treatment [3]. Changes to physical function and long term/latent treatment side effects such as fatigue, depression, pain, cognitive deficits, can lead to a poorer overall quality of life [6,7], particularly with regard to working life [8][9][10][11]. These long-term effects may cause impairments that delay or prevent individuals returning to work. 
A review by Menhert [12] found the average length of sick leave for a person treated for cancer to be 151 missed days from work [12]; and cancer is associated with longer sick leave than other chronic conditions [13]. Sickness absence is an important economic outcome as missed days from work is a cost to society as well as to the employer and employee. Many employees affected by cancer want to work not just for financial reasons, but for overall health, well-being and higher quality of life [14]. However, those continuing to work during treatment, or return to work following cancer treatment; are more likely to have poor work ability compared to those with other chronic condition [13,15]. In addition, some employees affected by cancer experience job discrimination and lack of support from managers and occupational health professionals [9]. Some individuals are unable to return to work (RTW) and are at high risk of unemployment [8]. This has financial implications as well as further long-term implications for their health and overall quality of life. In response to the work-related needs of those affected by cancer, a number of non-UK based healthcare-led interventions have been reported. These include providing information, counselling or advice about work or work-related issues [16][17][18][19][20][21][22], learning self-management skills in striving toward personal goals such as work [22], vocational training, job search assistance [23] and high-intensity physical training [24]. However, in nearly all of these interventions, breast cancer was the most common diagnosis and few of the interventions involved a multi-disciplinary approach that included health care services, the workplace and/or the employer. At present, RTW interventions are complex to design and implement. Furthermore, due to the use of different study designs and measures, it is difficult to adequately judge the RTW success and other work outcomes (for example, work ability) of these interventions [25]. RTW interventions are further complicated by the different healthcare settings, workplace settings and stakeholders that exist, each with their own distinctive environment and the potential to impact return to work [26], making it difficult to interpret their success. Although a number of robust interventions are underway [27,28], a recent review suggests that there are few support services in the UK designed to help people remain in, or return to work after cancer [29]. Issues relating to return to work for those affected by cancer have been identified as a major area for improvement by the UK National Cancer Strategy [30]. To address this, we developed a work-related guidance tool for those diagnosed with cancer to help them manage effectively their work or the return to work process so that they can make a timely return to work, manage the impact of their cancer-related health on their work, and manage the impact of work conditions upon their cancerrelated health. The need for such a tool was prompted by the knowledge that healthcare do not necessarily comprehend the work-related needs of patients [31] and employers do not understand the full impact conditions such as cancer can have upon the employee and their work [32]. In these circumstances, it is often the patients/employees who lose out on the help and support they need to be able to return to work and/or prevent work disability. 
Therefore, we envisaged that the tool would be a self-led intervention that would enable those who have been diagnosed with cancer to take the lead and identify their workrelated capabilities and limitations in relation to their diagnosis, prognosis and treatment, in consultation and discussion with a range of different healthcare professionals, employers, employment agencies and support services. In this respect the tool can be used by individuals with most types of cancer and in most work situations including those considering retirement or a change of employment. In addition, the tool would help those affected by cancer find solutions to their work-related needs by providing structural guidance in seeking relevant information and support from appropriate stakeholders, thereby enhancing the exchange of information. Therefore, the primary aim of the work-related guidance tool was to help those with cancer reduce sick leave and possible work restrictions and prevent unemployment and work disability by identifying work adjustments or ways to manage work with regard to their cancer-related health. An intervention mapping (IM) protocol [33] was used to design the tool and to ensure it was grounded in evidence and in theory. Whilst traditionally used to develop health promotion programmes, IM has also been used for designing interventions, particularly complex interventions such as RTW programmes [26,34]. This is because RTW interventions require a tailored and multi-factorial approach directed at various settings and stakeholders [26,35], and IM provides a structured systematic framework within which to develop, implement and evaluate an intervention. Although we are not implementing or designing an intervention per se, but designing a work-related guidance tool that may be used for intervention purposes, IM is well suited for this process. This is because IM has been used to develop similar intervention tools for other health conditions such as occupational health guidelines to prevent weight gain among employees [36]. A detailed overview of how IM was used to design the tool is the focus of this paper. Methods IM is a stepwise approach for theory and evidence based development and implementation of interventions. It consists of the following six steps, each leading to a product that guides the next step [33]: 1) a needs assessment; 2) the Identification of outcomes, performance objectives and change objectives; 3) selecting theorybased methods and practical strategies; 4) developing program components and materials; 5) planning for program adoption, implementation, and sustainability; and 6) planning for evaluation. Although presented as steps, IM is flexible process which makes it possible to oscillate between steps as new perspectives are gained. In our study, we used steps one to four (creation of a work-related guidance tool) to ensure the resource development was grounded in theory and evidence [33,37]. This was important since the desired outcome of the tool was its wide use by individuals with different cancer diagnosis employed in a wide range of employment settings. Therefore, the present paper focuses mainly on how steps one through to four of the IM procedure were used to develop the work-related guidance tool. Step 5 is outlined briefly in the context of planning for program adoption as well as evaluation by others. Step 1: Needs assessment The first step of the intervention mapping process was to conduct a needs assessment. 
This initially involved holding discussions with a key stakeholder, Macmillan Cancer Support, who is involved in both RTW research and in supporting employees with cancer to return to work. The objective of the needs assessment was to establish the rationale for a work-related guidance tool and to create an overview of the possible content of the tool. This was a three-stage process that included: 1) a review of the existing literature regarding cancer and work; 2) a review of existing work-related guidance tools; and 3) collection of new data using focus group discussions. The purpose for such a guidance tool was identified through these meetings, focus groups and the literature review. The focus of the literature review was to identify the employment and work-related issues in adult cancer patients and the existence of any work-related guidance tool. For the academic literature search we searched the databases PubMed, Medline, Web of Science, PsychInfo and Google Scholar, combining 'cancer' with each of the following terms: 'employment/work review' , 'employment' , work needs' , 'work ability' , 'work disability' , 'work limitations' , 'work adjustments' , 'return to work' , 'work issues' , 'work changes' , 'sickness presenteeism' , 'work problems' , 'work restrictions' , 'work difficulties' , 'sickness absence' , and 'sick leave'. Reference lists of the relevant articles were also searched. To identify the existence of any work-related guidance tools, we used the search terms: 'tool' , 'measure' , 'questionnaire' , 'scale' , 'instrument' 'assessment' , and combined each of these with 'patient self-management' , 'cancer self-management' , 'cancer, work and self-management'. The reference lists of identified articles were searched manually for existing measures or tools. Additionally, organisations and specialists in the area of rehabilitation were contacted by the research team to further identify self-management tools that focused on return to work and work ability. All searches were restricted to between January 2000 and October 2010. The lay literature search focused on websites and pamphlets written and published by cancer charities (for example, Macmillan Cancer Support, Bowel Cancer UK) and intended for the general public. Appropriate websites were identified via the Google search engine. Any external links to other websites were also searched but only if they were tailored towards financial, legal and employment matters in the UK (for example, the Department for Work and Pensions, Jobcentre Plus). Finally, focus groups were conducted with those who were being treated for cancer (with curative intent) or had recovered from cancer. The purpose of the focus groups was to explore additional gaps from the literature review in patient knowledge and gaps in information/advice received from healthcare professionals (and other stakeholders groups) with regard to cancer and work issues. The focus groups also informed the content and structure of the tool. Focus group participants and procedure Participants for the focus groups were recruited from various Macmillan Cancer Support sources (for example, Macmillan Cancer Voices Network, Macmillan Online Community, Macmillan Face book Group, and Macmillan e-newsletters/bulletins), cancer support groups and media press releases (including local newspaper and radio advertisements). 
The following inclusion criteria were applied for participation: 1) to be aged between 18-65 years; 2) to have been employed at the time of diagnosis; and 3) to have been diagnosed with cancer no more than five years ago. The criterion of five years was chosen to provide a reasonable time frame in which to expect return to work, to enhance accurate recall of recent work issues and to gain knowledge about the impact of late and/or latent treatment side effects on work. Those still wishing to participate contacted the researcher directly and completed a recruitment questionnaire (either in person or over the telephone) to ensure that they met participation criteria. Those identified as being suitable for study enrolment were asked to provide written informed consent. The participants who took part in the focus group were selected and considered as 'experts' based on their knowledge and experience of living with a diagnosis of cancer. The study was approved by Loughborough University's Ethical Advisory Committee. As we were not intending to recruit patients directly from NHS Trusts or run a healthcare intervention, NHS ethical approval was not required. Thirteen participants (aged 34-63) were recruited and took part in one of two focus groups at hired venues. As the majority (n=10) were female, individual interviews were conducted with six males (aged 37-65) with or recovering from cancer, in order to prevent gender bias amongst the study findings. Within the whole sample, there was a variety of cancer types and occupations see Table 1 for a summary. The overall organisation and moderation of the focus groups and interviews was co-ordinated and managed by the same researcher (KEAK). An interview schedule was used to provide a degree of consistency and structure to the process. Questions to facilitate discussion were open ended around the following topics: a) Existing information surrounding cancer and work issues b) Information needs surrounding vocational (and other) rehabilitation support c) Financial implications of cancer d) The impact of long term and/or latent side effects of treatment on work e) Psychological and emotional needs at work f ) Training and new skills needed to support return to work g) Dealing with health insurers h) The interview/recruitment process (i.e. with a prospective employer) All participants were provided with the interview questions before the start of the focus group or interview to optimise accurate recall. In addition, prompts were used as appropriate to gain clarification and to encourage respondents to elaborate on relevant topics. The focus group discussions and interviews were digitally recorded and transcribed verbatim. The duration of the focus groups and interviews was approximately one hour. Data were analysed using thematic analysis [38]. The reliability of the analysis was ensured through a systematic review of the data by all members of the research team. Following agreement on the themes identified, a table of themes was drawn up and the text passages were coded accordingly. Step 2: Identification of intended outcomes and performance objectives The purpose of Step 2 of the IM procedure was to specify who and what would change as a result of the work-related guidance tool. First, we defined the desired behavioural and environmental outcomes for the target group that need to occur in order to affect the determinants of the overall behavioural objectives identified in step 1. 
The overall desired behavioural outcome was for those with/recovering from cancer to identify possible solutions to work-related needs (such as work adjustments and ways to manage work). Specific desired outcomes were based around this. Second, performance objectives for each of the desired behavioural outcomes were specified. These are a step-by-step description of what the participants will do or how an environmental condition will be modified (including who will create the change) [33]. For example, if a desired behavioural outcome is for those with/recovering from cancer to make informed decisions related to their cancer and work issues, then the performance objective for that outcome would be for participants to identify appropriate support and resources. Performance objectives were then examined in light of determinants of behaviour and the environment to produce change objective statements. Change objectives are the most immediate targets of an intervention [33] and are specified in terms of who or what needs to change and/or be learned in order to affect the performance objective. As one of the aims of our intervention was to develop a tool to be used by those with/ recovering from cancer but that would be able to influence relevant stakeholders (e.g. employers, healthcare professionals) performance objectives and change objectives were also developed in relation to this. In order to develop change objectives, each performance objective is scrutinised separately and appropriate theoretical determinants identified. For example, if a performance objective is for individuals to 'self-manage' their use of the work-related guidance tool, an appropriate theoretical determinant may be self-efficacy [39]. Step 3: Selecting theory-based methods and practical strategies The third step of the intervention mapping process involved identifying suitable theoretical methods to change behaviour and translating these into practical strategies. Bartholomew [33] states that the goal of step 3 is to use a conceptual model or theory (for example, socio-cognitive theory) to guide the identification of appropriate intervention methods and delivery strategies related to the objectives stated in step 2. In this step, using evidence from the literature review and focus groups, we identified appropriate theoretical methods and strategies for our stated objectives. Step 4: Developing program components and materials In step four, a description of the scope and content of the tool is outlined. The steering group (members of the National Cancer Survivorship initiative and Macmillan Cancer Support) and expert stakeholders provided guidance regarding the scope and content of the tool. A feasibility study was carried out to test the work-related guidance tool and to ensure it met the change objectives and practical strategies identified in Step 3. Step 5: Planning for programme adoption and implementation and step 6: creating an evaluation plan In step five, a plan for programme adoption and implementation was outlined; and in step six, a plan for evaluation was generated. Both step five and six are not in the scope of the current paper; they will only briefly be discussed in the results. Results Step 1: Needs assessment Literature review The academic literature is focused largely on return to work rates and risk factors known to hinder successful work resumption and/or work ability (for example, age, educational attainment, cancer stage, treatment and occupation type). 
Other studies have focused on the workplace concerns of those with/recovering from cancer. For example, apprehension over financial security [40]. A range of studies, however, have identified what could be done to facilitate return to work and work ability [9,[31][32][33][34][35][36][37][38][39][40][41][42][43][44][45][46][47]. These have reported that flexible working hours (for example, to attend medical appointments) and adjusted working arrangements (for example, to start their shift later in the day) can improve return to work and work ability [9,[45][46][47]. This was also supported by the lay literature, which frequently reported that employees have the right to request modifications to their working pattern or ask for other "reasonable" adjustments [48]. Other potential needs identified were related to information about the financial implications and the potential side effects (i.e. physical and emotional consequences) of cancer and its treatment on work [46]. A review of return-to-work intervention studies for breast cancer survivors found exercise and counselling associated with return to work [49]. Another review on returnto-work interventions of all cancer types reported that encouragement, education or advice about work or workrelated topics, vocational training (for example, learning self-management skills) and work adjustments had some evidence of effectiveness for return-to-work outcomes [25]. In sum, the working needs of those with or recovering from cancer were categorised into three key topic areas: 1) Health 2) Work 3) Finance For the work-related guidance tool review, few relevant articles were identified. This was because most existing tools were too specific (i.e. in terms of different health conditions) and did not focus on work-related issues. Similarly, no relevant tools were identified by rehabilitation experts and organisations. Table 1 summarises the participant characteristics. Data from the two focus groups and the interviews suggested three main themes relating to both positive and negative work experiences of those with or recovering from cancer, and their work-related needs. These work-related needs were identified as the most important information or support required by participants to make an informed choice about their work and return to work. The most important work-related needs of the participants are summarised in Table 2 along with a number of subthemes related to each main theme. The main themes identified were: health-related issues and their impact on work (for example, information on how the 'fit note' process is managed); workplace information and support (for example, workplace accommodations and flexible working hours to accommodate medical appointments and the side effects of treatment); and financial-related issues following diagnosis and treatment (for example, clarification on insurance policies and additional benefits). Focus group and interview data Participants discussed a range of factors for each main theme. For example, for work information and support they felt that, in order to aid the transition back to work, it was important to be updated on changes that may have occurred in the workplace during their absence (i.e. awareness of new information). They also discussed the importance of maintaining contact with co-workers during sickness absence. 
Some participants expressed concern over ambiguity surrounding what information their previous/current employer was allowed to disclose to a future employer and felt that more information was required regarding this matter (new employment disclosure). It was also important to participants that their company policies surrounding sickness absence, sick leave entitlement, sick pay arrangements and how much notice is required before returning to work were made clearer and more easily available. With regard to health, participants highlighted a need for better information provision from various parties regarding the possible impact of cancer and its treatment on work. They expressed particular concerns regarding information relating to understanding the nature of potential side effects (i.e. physical and/or psychological) and being enlightened as to when they may interfere with work (information provision). Finally, many participants indicated that they needed greater clarification regarding financial matters associated with cancer and its treatment. Some participants felt that there was a lot of uncertainty regarding how they were covered by their insurance policies and others felt that information was limited in terms of obtaining additional benefits, for example, to cover additional travel costs. Step 2: Identification of intended outcomes and performance objectives Based on the findings of the focus groups and literature review, the overall desired outcome for the intervention tool was defined as 'a self-management of work-related decisions'. Due to the differing occupations and work patterns of the intended users, it was clear that the guidance tool needed to be designed so that there was relevance to almost all employee groups. Therefore, the three key target behaviours for those with/recovering from cancer were 1) to take control over work-related issues; 2) to make informed decisions related to cancer and work issues; and 3) to enhance their return to work and/or work ability. As involvement of relevant stakeholders was vital for those with/recovering from cancer in achieving the three behaviour targets, we identified that we required those with/recovering from cancer to clearly communicate their needs to the stakeholders and to engage them in supporting their needs. It was therefore decided that the tool would need to consist of relevant questions that those with or recovering from cancer could ask their employer in order to promote dialogue. This way, an employer could provide a response (in the form of information, support or action) relevant to the individual, their circumstances and their job and work setting. The needs assessment also identified health care providers (general practitioners, oncologists, oncology nurses), unions, occupational health services, charities and benefit advisors as important stakeholders for employees making decisions about work, especially for those self-employed or those not returning back to work/back to the same workplace. Therefore, the work-related guidance tool would also contain relevant questions for such stakeholders. The next stage of the intervention mapping process was to specify the performance objectives. Using the above information, the research team listed the steps that would need to be taken in order to achieve the overall intended outcome. Next, the performance objectives for each of the desired behavioural outcomes were identified. These can be found in Table 3. 
To create a matrix of 'change objectives' , the main personal determinants (factors within the individual and in their direct control) and external determinants (factors that can directly influence the health behaviour or environmental conditions) of behaviour change for each performance objective were first operationalised. The three categories of determinants from the Theory of Planned Behaviour [50] were considered appropriate and were targeted. These are: attitude (for example, how positive the individual is to ask for support); social influence (for example. the subjective norms and social norms of receiving support from others); and self-efficacy (for example how confident the individual is to approach relevant stakeholders for support). Intention and knowledge were also identified as important determinants of the performance objectives. The tool aims to influence all these determinants but especially self-efficacy and knowledge. Selfmanagement intervention programmes have shown that changes in self-efficacy (a determinant based on Bandura's Social Cognitive Theory [51] are associated with changes in behaviour and health status for a range of chronic conditions e.g. [52]. The final step is to create a matrix with all the key information by mapping performance objectives (row) against determinants (column headings). The cells are then filled with what the target group should do and/or know and what should change in the environment in order for there to be a positive impact on each determinant so that the performance objective can be achieved. Step 3: Selecting a theory-based method and practical strategies Given the identified outcomes and objectives of the tool, empowerment was selected as the theoretical framework best suited to underpin tool design, in order to influence users' self-efficacy and knowledge. Empowerment has been defined as a means by which individuals gain a sense of control over their lives, particularly with regard to decision making [53]. Therefore, empowerment is a potential mechanism for increasing self-efficacy, as it enables an individual to feel competent and confident in their ability to perform self-management behaviour [54,55]. Furthermore, empowered patients are in a better position to gather knowledge and having knowledge is likely to increase empowerment. The objectives of empowerment interventions used in the workplace focus on skills and behaviour change by improving, for example, employees' action planning activities and selfefficacy. Self-management programmes for employees also focus on similar strategies see [56]. The research team subsequently identified practical strategies that are Know when to use the tool thought to influence the theoretical determinants using empowerment theory as well as other appropriate theoretical methods [33]. Theoretical methods and practical strategies are specified in Table 4. Step 4: Developing program components and materials The first phase in Step 4 was to decide the scope of the work-related guidance tool. To determine the structure of the work-related guidance tool, we carried out another literature review to identify existing empowerment tools and evaluate their content. The academic and grey literature were searched using a similar search strategy for identifying self-management tools to that outlined in Step 1. The results from this search identified a large degree of variability among existing measures of empowerment. 
For example, some tools require respondents to indicate their level of agreement with a list of statements that draw upon concepts such as self-efficacy, perceived control, self-esteem and a sense of responsibility [57,58]. Other tools comprise a list of questions that aim to encourage patients to communicate with various healthcare professionals [59]. Therefore, whilst some measures are designed to determine an individual's current level of empowerment, others are designed to enhance perceptions of empowerment. Since a key aim of our tool was to empower those with or recovering from cancer to effectively manage their work or the return-to-work process, it seemed appropriate to develop a tool that consisted of 'empowering questions' to encourage individuals to become active communicators with key stakeholders such as healthcare professionals, employers and employment agencies. Using the information collated from the above steps of the intervention mapping protocol, the first draft of the tool was developed. It consisted of 43 questions that were divided into one of four categories to represent the stages of the cancer journey in relation to work: 1) initial work issues and absence from work; 2) preparing to return to work; 3) returning to work; and 4) not returning to work. Individuals are given the tool by Macmillan Cancer Support staff and/or other support services, or they can download it from the Macmillan website. One-to-one discussion with Macmillan Cancer Support staff and information available via their website will be given to encourage implementation of intentions (i.e. to discuss work-related issues). A questionnaire format would be used to enhance the usability of the tool, and questions in the tool are tailored to encourage users to take a pro-active approach to obtaining relevant information related to potential cancer and work issues (i.e. 'empowering' questions). The theoretical methods and practical strategies specified in Table 4 include, for example:
- Request information and support (goal setting): through questions posed in the tool, individuals can activate social support at work and from other key stakeholders. Questions will include goal-setting questions, e.g. 'How can we work together to make decisions about any changes to my job role and description?' Each question in the tool will clearly indicate which stakeholder(s) are important to ask.
- Individualisation (TM) and active processing of information: the guidelines in the tool will provide a clear journey for users to follow, but be flexible enough for them to choose which section applies to them. Written guidelines in the tool will emphasise the flexibility for the individual in choosing one or more important stakeholders to ask the question (from those indicated, and where more than one stakeholder has been identified) as relevant to their needs.
- Guided practice (SCT) and enactment (SCT): the tool is portable in design so that users can take it with them to meetings with relevant stakeholders. Individuals can practise asking the questions beforehand. Using the tool frequently throughout the cancer journey should result in mastery experience.
- Assess adequacy of information, resources and support related to work/return-to-work decisions (decisional balance (SCT); feedback (ToL)): individuals write down the feedback given by stakeholders in order to identify obstacles and solutions and to assess which information, support or resources offered will be most beneficial to them. 
Within each of these categories, the questions were organised into three broad themes: 1) Health 2) Finance 3) Work For each question in the tool, a list of seven stakeholders was provided (e.g. Oncology team, General Practitioner, Occupational Health) in order that respondents could state to whom each question should be asked. A sample of the tool is presented in Table 5. Next, it was necessary to determine precisely how the tool as should be delivered. An important element of the tool was that it should be self-led by those with/ recovering from cancer and could be used throughout their cancer journey as appropriate. The tool could be delivered to the target group by making it available to them at oncology clinics by oncology nurses and/or oncologists. This was a realistic approach as oncology clinics should be providing work-related information and support to cancer patients as part of their care (see National Cancer Survivorship Initiative website www.ncsi. org.uk). The tool could also be delivered by Macmillan Cancer Support nurses and support groups and on the Macmillan Cancer Support website as well as other websites such as on the National Cancer Survivorship Initiative website. As the premise for the tool was that not only should it be self-led requiring minimal input from experts, it should also be applicable to a variety of different cancer types, occupations and work settings, the above delivery strategies were pragmatic. The guidance for using the tool was then written taking into account the steps above. To ensure that the content of the tool met the intended change objectives, that it would empower the target audience, could be used without expert delivery, and that there was a clear rationale regarding to whom each question should be directed, a Delphi study was carried out. The Delphi method is a systematic technique which aims to engage a large number of experts (i.e. those who specialise in a particular field of interest or who have knowledge about a specific subject) in a process to obtain consensus of opinion or judgment on a topic where the required information is incomplete or scarcely available [60]. It is an iterative process based on the results of several questionnaire rating rounds. Responses are analysed and statistically summarised (usually into medians and upper and lower quartiles) and presented to the expert panel for further consideration [61]. In the present study, a two-round Delphi consensus method was used and the following expert groups were identified for this process: those with/recovering from cancer and in various stages of their cancer journey/return-to-work process; healthcare professionals relevant to cancer patients (e.g. General Practitioners, oncologists, nurses); occupational health staff, employers and employer representatives (e.g. human resources, line managers, trade unions); charities (e.g. Macmillan Cancer Support); benefit advisors (e.g. Job Centre Plus); and researchers with expertise in cancer and return to work issues (both national and international experts). These experts were recruited from a variety of different sources. Ethical approval was obtained from Loughborough University. One hundred and seventy-two 'experts' took part in the Delphi study. Participant characteristic are presented in Table 6. 
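The statistical summarisation carried out between rounds, reducing each question's ratings to a median and upper and lower quartiles before feeding them back to the panel, can be sketched as follows; the nine-point scale matches the rating scale used in this study, but the consensus cut-offs in the sketch are illustrative assumptions rather than the study's exact criteria.

```python
import numpy as np

def summarise_delphi_item(ratings, consensus_iqr=1.0, agreement_median=7):
    """Summarise one Delphi item rated on a 1-9 Likert scale.

    Returns the median, quartiles, interquartile range, and a provisional
    'consensus reached' flag; the IQR and median cut-offs are assumed values
    for illustration only.
    """
    ratings = np.asarray(ratings, dtype=float)
    median = np.median(ratings)
    q1, q3 = np.percentile(ratings, [25, 75])
    iqr = q3 - q1
    consensus = (iqr <= consensus_iqr) and (median >= agreement_median)
    return {"median": median, "q1": q1, "q3": q3, "iqr": iqr, "consensus": consensus}

# Example with hypothetical round-1 ratings for a single question
print(summarise_delphi_item([8, 7, 9, 8, 6, 8, 7, 9, 8]))
```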
This was conducted online where each participant was asked to rate each question (on a nine-point Likert scale) the degree to which they thought that the question posed would help the target group make decisions related to work and well-being (1 = not at all, 9 = a great deal); and to whom the question should be asked (e.g. Oncology Team, Occupational Health, Line Manager, Human Resources) (multiple answers were permitted). Open textboxes were also provided for feedback on each question or section. Following feedback, one hundred and thirty nine participants completed the second round of the Delphi, responding to the questions that did not reach consensus in the first round. Data were analysed further to assess consensus between the experts. This resulted in 40 questions for the tool and a range of agreed stakeholders relevant for responding to each question. A feasibility study was then carried out with an independent group of participants who had or were recovering from cancer (n=38) and who were at various stages of the cancer journey. The purpose of the feasibility study was to assess whether the tool met the change objectives and potential strategies. Again, the participants were recruited from a variety of sources (excluding the National Health Service) such as local cancer support groups and networks, and through an advertisement placed in the local paper. Those expressing an interest were asked to complete a consent form and inclusion criteria form (must have been in employment at time of diagnosis; no history of psychiatric disorders). Those meeting the inclusion criteria were then sent a link to an online survey which asked participants to rate their self-efficacy and perceptions of empowerment regarding work decisions. After they had completed the survey, they were sent a hard copy of the tool via postal service. The tool was tested over a six week period after which participants were asked to complete the same online survey again and additional questions on their experience of using the tool, including how many questions they used, what responses they obtained and how useful each question was for their needs (process evaluation). Analyses of the data suggest that levels of self-efficacy and feelings of empowerment improved over the six week period. However, there was no control group and therefore these results must be interpreted with caution. Findings from the feasibility study and its evaluation will be presented in a separate paper. Step 5: Planning for program adoption and implementation Once the tool was finalised, the next stage was to create a plan for the adoption and implementation of the tool amongst the target group. This meant developing an implementation plan and training for healthcare professionals, support groups and employers who could direct those with/recovering from cancer to the tool. This is currently being led by Macmillan Cancer Support who have adopted and implemented the following strategies: 1) 10,000 printed copies of the tool in 2011 distributed to Macmillan Cancer Support services, vocational rehabilitation support services and other support services for dissemination to patients and service users. 
Printing, distribution and dissemination of the tool is on-going; 2) provision of downloadable PDF version of the tool through their website and through the National Cancer Survivorship website; 3) printed copies of the tool are made available in a 'toolbox' that employers can order for free from the Macmillan website 4) training HR and line managers in how to respond to the use of the tool and how to provide support to employees diagnosed with cancer. Through the Department of Health and employer organisations such as Chartered Institute for Personnel Development (CIPD) Macmillan have raised awareness of the tool and mobilised human resources personnel, line managers and employers for training on the usage of the tool. The tool has also been evaluated by an independent research team as part of the evaluation of the cancer vocational rehabilitation pilots (by the University College of London on behalf of Macmillan Cancer Support and the National Cancer Survivorship Initiative). Step 6: Creating an evaluation plan The effectiveness of the tool will be evaluated in a randomised control trial (RCT). Patients who have been diagnosed with cancer and are in employment at the time of diagnosis will be invited to participate via oncology clinics and will be randomised into a control (no intervention) group and a intervention group who will be asked to use the tool for twelve months. The primary outcome measure is defined as: duration of sick leave in calendar from first day of sick leave to full or partial return to work. Secondary outcome measures include quality of working life; general quality of life; psychological well-being; job self-efficacy and perceptions of empowerment. Questions to evaluate behavioural determinants will also be included. Assessments will take place at baseline, and after three, six, nine and twelve months. A process evaluation will also be conducted to assess satisfaction and utility of the tool. A power calculation is yet to be determined as this may need to be conducted separately for different types of cancer. It might be that we conduct an RCT in only four (or fewer) types of common cancers. Ethical approval will be sought from the NHS Trust ethics committee. Details about the evaluation of the tool will be presented in a separate paper. Discussion In this report, we have described the development of a work-related guidance tool using intervention mapping. To our knowledge, this is the first study to use IM to develop a tool as a self-led intervention itself, rather than an intervention programme per se. IM has been proven to be a useful protocol for developing not only health promotion programmes but also RTW programmes [34,56,62]. We found IM to be a useful planning template as it enabled a tool to be developed that is theoretically grounded and evidence-based, with independent, systematic involvement from key experts (the Delphi study and feasibility study) to ensure the tool met all its objectives. This is one of the key strengths of using IM; by using conceptual models, we were able to construct a work-related guidance tool that is pragmatic and credibly tailored to the needs of a specific population. A key strength of our work-related guidance tool is that it is designed to be a self-led intervention that is underpinned by a theoretical framework. Therefore, there is no training required for those with/recovering from cancer in using the tool. This means that very little input is necessary by health care professionals, who already have a demanding workload. 
In terms of raising awareness of the tool among healthcare professionals, employers and those with/recovering from cancer, as outlined in step five, much of this work is currently underway by Macmillan Cancer Support. Although the number of RTW interventions is increasing, none have been based on a self-led tool that empowers the patients by encouraging them to take the lead and identify their work-related capabilities and limitations in relation to their diagnosis, prognosis and treatment. In addition, the tool promotes consultation and discussion with a range of different healthcare professionals, employers, employment agencies and support services who are all involved in the RTW of an individual diagnosed and treated for cancer. This is a unique aspect of our tool which is made distinctive by the involvement of these key stakeholders in confirming the appropriateness and relevance of each question in the tool in the Delphi consensus process. To our knowledge, no similar technique has been used for developing RTW interventions for those affected by cancer. Our literature review and focus group data revealed these stakeholders to play a key role in facilitating or creating an obstacle for making a timely return to work and in obtaining appropriate workplace support. Our focus group data further reflect the current literature on the obstacles and facilitators to workplace accommodations [32]. It is envisaged that the tool will address some of the problems raised in the literature with regard to reducing sickness absence, minimising the risk of unnecessary unemployment and financial problems; job discrimination and lack of emotional and practical support from line managers, employers, occupational health services and healthcare services [4,32,46]. A possible limitation to using IM is that it is time consuming. The process described in this paper took nine months to complete. It required a full time researcher and six weeks full-time work by the research team. This is a similar experience reported by other researchers using IM [56,62]. Due to both funding and time constraints, it was not possible to create a complete description of the adoption, implementation (step 5) and evaluation plan for the work-related guidance tool. Although Macmillan Cancer Support and the NCSI have developed an adoption plan for the tool, this may not be rigorously assessed. This is a particular concern as the contribution of IM in using theoretical knowledge and strategies to underpin the tool may not be evaluated and documented. Another issue of concern is whether we are able to test the tool in an RCT as Macmillan Cancer Support and the NCSI have already made the tool available through channels. Therefore, it may be difficult to recruit a control group without potential exposure to the tool and contamination by such exposure. These issues will have to be resolved. A possible weakness of the study is that although we included a wide range of stakeholders from a variety of backgrounds and experiences, the employers involved were mainly from medium to large organisations. Therefore the tool may not capture the perspectives of small organisations. However, a representative proportion of our participants with/recovering from cancer either worked for a small organisation or were selfemployed and therefore their perspectives have contributed to the development and refinement of the tool. 
Conclusions In this study we describe the development of a self-management work-related guidance tool for those with/recovering from cancer, using the Intervention Mapping framework. The tool has already been implemented by Macmillan Cancer Support, but the next step is to test the work-related guidance tool in an RCT.
Hardware Implementation of Driver Fatigue Detection System -- An efficient hardware system for detecting driver fatigue is implemented using a non-intrusive, vision-based approach and the DM3730 processor. In this system a Logitech USB camera is fixed at a distance of 30 cm, pointing towards the driver's face, and monitors the face and eyes to detect driver fatigue. The entire system works on the Linux operating system and uses the DM3730 processor as hardware. An image processing algorithm is developed to estimate whether the eyes are open or closed, and fatigue is estimated using the PERCLOS method. The eye blink rate for a normal human being is 12 times per minute. In this algorithm, if the eyes are closed for 15 consecutive frames in a minute or if PERCLOS > 80%, then the system issues a warning to stop the vehicle. The system is tested on 45 different persons, i.e., 15 women, 15 men and 15 persons wearing spectacles, and the detection rate is 99.2%. The time taken by the system to detect driver fatigue and issue a warning is less than 5 ms. The entire system can be easily placed inside a vehicle, as the hardware used is small in size and easily implementable without distracting the driver. It is observed that driver fatigue first appears in the eyes and mouth; hence a DFD system based on image and computer vision involves extraction of facial features. Face recognition has been used for many real-time applications, such as face recognition systems used at airports to provide high-level security and Human Computer Interface (HCI) based on facial expression to control the mouse and keyboard. Automatic face extraction leads to criminal identification, security, surveillance systems and smart phone applications (Mohammed Saaidia, and Sylvie Lelandais, 2007). Research shows that the most promising face recognition methods are Eigen Vectors, Principal Component Analysis (PCA), Skin Segmentation, Artificial Neural Network (ANN), and Template Matching [8]. D. Fatigue Detection Systems Development of technology to detect fatigue is a challenging task faced by most transportation companies. Driving a vehicle is a difficult task because the driver should be capable of taking correct decisions on the road under different conditions. Recent statistics show that fatigue is the main reason for 19% of fatal road accidents and 27% of injury-related road accidents. Driving for long hours and inappropriate sleep make the driver feel fatigued, which in turn makes them sleepy (Elzohairy, Y., 2008). The research shows that the cause of an accident falls into three categories: human, vehicle and environmental conditions. In the majority of accidents, the driver's contribution is 93%, the vehicle's contribution is 13% and environmental conditions contribute about 34%. In a few cases, the cause of an accident can be a contribution of more than one factor [9]. Many researchers have worked on techniques to detect driver fatigue and alert drivers to avoid accidents on the road. The reduction of crashes related to driver fatigue needs to be addressed with innovative concepts and methodologies. A fatigued driver exhibits certain observable behaviours such as eye gaze, head movement, pupil movement and facial expression. Hartley et al. proposed a classification of technologies for driver fatigue detection systems into the following categories: i. Mathematical models of dynamic alertness. ii. Vehicle-based techniques to measure SWM, speed, and lane deviation. iii. Real-time invasive and non-invasive fatigue detection systems. In 1992, researchers proposed an intrusive driver fatigue monitoring system. 
An intrusive system involves physical contact with the driver. The level of invasiveness is measured based on the interaction between the driver and the detection system. Non-intrusive systems involve no contact with the driver. Experimental results show that intrusive systems provide better results compared to non-intrusive systems. In 1997 and 2001, researchers proposed driver fatigue measurement using physiological parameters. The systems measured ECG, EEG, EOG, skin temperature, orientation, and O2 saturation level in the blood. These physiological parameters are acquired directly; hence such systems achieve better detection results. The problems with this technique are that acquiring and processing a large number of signals makes it complex and extremely invasive. Monitoring the above-mentioned signals requires large devices which cannot be easily placed inside the vehicle. Liu, Schreiner and Dingus in 1990 used steering wheel metrics, which measure steering behaviour using 'σ' (the standard deviation) of the wheel angle (Liu, Y.C., et al., 1990) [5]. Jarek Krajewski, David Sommer, et al. proposed fatigue detection based on steering behaviour, measured using a signal processing procedure that extracts features and computes three feature types: time, frequency and state-space values. The method yields a recognition rate of 86.1%, which is the highest detection rate achieved in steering-behaviour-based driver fatigue detection (Jarek Krajewski, David Sommer, et al., 2007) [10]. In 2005, researchers proposed fatigue estimation using driver feedback; the method is non-intrusive but is tiresome and annoying for the driver (Artaud et al., Mabbott et al., 2005) [11]. In 2006, Martin Gallagher used a non-intrusive eye-based and pressure sensor approach for fatigue detection, using a pressure sensor circuit along with an ADuC8031, a PC and a camera. The success rate was 80%; the drawbacks were that it takes 8 seconds to process a frame and that it does not work on occluded faces or under different illumination conditions (Martin Gallagher, 2006-07) [12]. In 2010, researchers made use of PERCLOS to detect driver fatigue, detecting eyes open, semi-open and closed; however, the time taken was longer and a high-grade PC was required to process the frames. In 2011, an author used Haar features to detect driver fatigue based on eye blink; the method works on rotated images with a success rate of 96%, but it fails to warn the driver and testing was done on a computer. II. FLOW CHART FOR DRIVER FATIGUE DETECTION SYSTEM The driver fatigue detection system flowchart is shown in Figure 1. The fatigue detection system processes a real-time video stream to identify the driver's fatigue level by detecting the frames in which the eyes are open/closed. As shown in Figure 1, as soon as the board power is ON, it first checks the working condition of all the connected peripherals, such as the keyboard, mouse, camera, GSM module and speakers. If any one of these is not working properly, then the board is reset using the RESET button for rebooting. After successful verification of all peripherals, the camera starts capturing the video stream. For the video stream, the system checks the lighting. For poor lighting conditions, a set of 50 LEDs can be used to illuminate the driver's face. This set of 50 LEDs is used as a source of illumination for testing purposes in the laboratory and is fixed such that it does not illuminate the driver's face directly. If the light source falls directly on the driver's face, then it would be difficult for the driver to drive the vehicle due to reflection. Hence, it is used only for testing purposes. 
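A minimal sketch of the decision logic implied by the flow chart, counting consecutive closed-eye frames and tracking PERCLOS over a rolling window, is shown below. The thresholds follow the values quoted in this paper (15 consecutive closed frames, PERCLOS > 80%), while the class name and the way the per-frame eye state is supplied are illustrative assumptions.

```python
from collections import deque

class FatigueEstimator:
    """Tracks per-frame eye state and flags fatigue using the two rules above:
    15 consecutive closed frames, or PERCLOS > 80% over a rolling window."""

    def __init__(self, fps=30, window_seconds=60,
                 consecutive_closed_limit=15, perclos_limit=0.80):
        self.window = deque(maxlen=fps * window_seconds)  # 1 = closed, 0 = open
        self.consecutive_closed = 0
        self.consecutive_closed_limit = consecutive_closed_limit
        self.perclos_limit = perclos_limit

    def update(self, eye_closed):
        """Feed one frame's eye state; returns (fatigued, current PERCLOS)."""
        self.window.append(1 if eye_closed else 0)
        self.consecutive_closed = self.consecutive_closed + 1 if eye_closed else 0
        perclos = sum(self.window) / len(self.window)
        fatigued = (self.consecutive_closed >= self.consecutive_closed_limit
                    or perclos > self.perclos_limit)
        return fatigued, perclos

# Usage: inside the capture loop, call estimator.update(eye_closed) for each frame,
# trigger the speaker warning when it returns True, and escalate to the SMS alert
# if the driver does not respond.
estimator = FatigueEstimator()
```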
The captured video stream is converted into frames to perform the image processing task of estimating the driver's fatigue based on eye blink rate and yawning. The proposed image processing algorithm for fatigue estimation is discussed in the next section. III. REAL TIME IMPLEMENTATION OF DRIVER FATIGUE DETECTION SYSTEM The real-time driver fatigue detection system is developed using image processing, where the driver's face is recorded continuously through the camera. Most image-processing-based systems depend on intensity changes; hence, it is required that the background should not contain any object with strong intensity changes, otherwise the connected camera may capture a highly reflective object behind the driver's seat as an eye. The design of the prototype assumes that inside the vehicle there is no direct light and the background is uniform. As per the requirement, when the ambient light is poor (night time), a light source must be present to compensate for the effect of poor light. Since it is a prototype, a 50-LED light source is used, which illuminates the driver's face during poor lighting conditions. This light source is fixed such that it will not illuminate the driver's eyes directly. If the source of light is not taken into consideration, a camera with lights can be used for capturing the driver's image, which will illuminate the image. The fatigue detection system uses a Logitech Pro 9000 USB camera to capture the driver's face. The camera is placed in front of the driver's face, approximately 25 cm away from the face. The position of the camera must meet the following criteria: i. the majority of the captured frame should contain the driver's face; ii. the driver's face should be at the centre of the frame. These two criteria reduce the complexity of identifying the face and eyes to a certain extent. Figure 2 shows the prototype of the driver fatigue detection system. In real time, the entire setup is tested by placing this system in a vehicle, as shown in Figure 2. The driver fatigue detection system is developed and tested on open-source BeagleBoard-xM hardware. The BeagleBoard-xM has a faster CPU core (clocked at 1 GHz), 512 MB of RAM, an onboard Ethernet jack and a 4-port USB hub. Figure 3 shows the driver fatigue detection system implementation. The Logitech USB camera is connected to one of the USB ports to capture images at the rate of 30 fps. The camera is fixed in front of the driver such that it continuously captures the driver's image. The captured video is read in the form of images and is processed to identify whether the driver's eyes are open or closed. The entire fatigue detection algorithm is developed using the OpenCV (Open Source Computer Vision) library, and the coding is done using the Python programming language. All the software required to develop the driver fatigue detection system was installed onto the 8 GB memory card after installing the Ubuntu 11.10 operating system. A. Specifications The process is continued and each frame is checked to detect the face and eyes. If the driver's eyes are closed consecutively for 15 frames, then the driver is alerted through speakers connected to the audio jack of the BeagleBoard-xM.
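A rough sketch of the per-frame face and eye analysis described above is given below, using OpenCV's bundled Haar cascades in Python. The binarisation threshold and the white-pixel cutoff used to decide whether an eye is open are assumed values for illustration, not the exact parameters of this implementation.

```python
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def eye_closed_in_frame(frame, white_pixel_cutoff=150):
    """Return True if no sufficiently 'open' eye is found in the frame.

    Each detected eye region is binarised and its bright pixels are counted;
    the threshold (70) and cutoff are illustrative assumptions.
    """
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        roi = gray[y:y + h // 2, x:x + w]  # eyes lie in the upper half of the face
        eyes = eye_cascade.detectMultiScale(roi, scaleFactor=1.1, minNeighbors=5)
        for (ex, ey, ew, eh) in eyes:
            eye = roi[ey:ey + eh, ex:ex + ew]
            _, binary = cv2.threshold(eye, 70, 255, cv2.THRESH_BINARY)
            if cv2.countNonZero(binary) > white_pixel_cutoff:
                return False  # at least one open eye detected
    return True  # no open eye found: treat the frame as eyes closed

# Usage: cap = cv2.VideoCapture(0); ret, frame = cap.read();
# closed = eye_closed_in_frame(frame)  # feed this into the fatigue counter above
```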
IV. RESULTS The complete setup is made in the laboratory, and the proposed algorithm is tested on four databases of images (GTAV, Face Expression, MathWorks Video and VITS); live testing is also performed on 45 different persons under different illumination conditions. The setup is shown in the figure below. The results of the proposed algorithm are shown in the figures below, under different illumination conditions, different head positions, and for persons with and without glasses. The images used for testing the proposed algorithm are tabulated in Table I. The overall success rate of the proposed system is calculated using equation (1). Figure 4. Simulation result of open eye; simulation result of closed eye. V. CONCLUSION In this paper a new system is proposed to estimate driver fatigue using image processing techniques. Driver fatigue is estimated based on the eye blink rate and is calculated using the PERCLOS method. If the eyes are closed for 15 frames or if PERCLOS is >80%, then the driver is entering drowsiness and is alerted to stop the vehicle. The face and eyes are detected using a Haar Cascade Classifier. The eye open/close state is estimated using contours and by counting the number of white pixels. The entire setup is made in the laboratory and tested on the GTAV, MathWorks Video, Face Expression and VITS database images. The system is tested on video, and the overall detection rate is 97.7%. The system is also tested live, and the overall detection rate is 99.7%. The system is developed on BeagleBoard-xM hardware on the Linux operating system, and the entire image processing code is developed using the Python programming language. The developed algorithm works under different illumination, head rotation and for persons wearing glasses. The time taken by the algorithm to estimate driver fatigue is <5 ms on a 1 GHz processor.
Effect of a High Protein Weight Loss Diet on Weight, High-Sensitivity C-Reactive Protein, and Cardiovascular Risk among Overweight and Obese Women: A Parallel Clinical Trial Studies regarding the effects of high protein (HP) diets on cardiovascular (CVD) risk factors have reported contradictory results. We aimed to determine the effects of an HP diet on CVD risk factors and high-sensitivity C-reactive protein (hs-CRP) among overweight and obese women. In this randomized controlled trial, we recruited 60 overweight and obese women, aged 20–65, into an HP or energy-restricted control diet for three months (protein, carbohydrate, and fat: 25%, 45%, and 30% versus 15%, 55%, and 30%, resp.). Total protein was divided between animal and plant sources in a 1 : 1 ratio, and animal sources were distributed equally between meats and dairy products. Fasting blood samples, hs-CRP, lipid profile, systolic and diastolic blood pressure, and anthropometric measurements were assessed using standard guidelines. Percent change was significantly different between the two diet groups for weight (standard protein (SP): −3.90 ± 0.26 versus HP: −6.10 ± 0.34%; P < 0.0001, resp.) and waist circumference (SP: −3.03 ± 0.21 versus HP: −5.06 ± 0.28%; P < 0.0001, resp.). Percent change of fasting blood glucose (FBG) substantially decreased in the control group compared to the HP group (−9.13 ± 0.67 versus −4.93 ± 1.4%; P = 0.01, resp.). Total cholesterol, systolic blood pressure (SBP), and diastolic blood pressure (DBP) decreased both in the HP and in the control diet groups (P = 0.06, P = 0.07, and P = 0.09, resp.); however, the results were marginally significant. Serum levels of hs-CRP were reduced both in the control (−0.08 ± 0.11%, P = 0.06) and in the high protein groups (−0.04 ± 0.09%, P = 0.06). The energy-restricted HP diet resulted in more beneficial effects on weight loss and reduction of waist circumference. CVD risk factors may improve with HP diets among overweight and obese women. When using isoenergetic weight loss diets, total cholesterol, hs-CRP, and SBP were marginally significantly reduced, independent of dietary protein content. This trial is registered with ClinicalTrials.gov NCT01763528. Introduction Obesity is a chronic disease that is influenced by an interaction between both genetic and environmental factors [1,2]. Obesity has emerged as one of the greatest public health problems in the last century [3] and is a leading cause of many other chronic diseases, including hypertension, dyslipidemia, inflammation, type 2 diabetes, cancer, and cardiovascular disease (CVD) [4,5]. In parallel with the development of obesity, production of adipose tissue derived proteins, such as C-reactive protein (CRP), is usually increased. Elevation of CRP might lead to insulin resistance and CVD [6]. The worldwide prevalence of obesity is rising in developed and developing countries [2], both in adults and in adolescents [1,7]. Recent research suggests that prevalence of obesity is increasing in Iran, especially in women [5,8]. Moderate weight loss diets, those leading to reduction of 5-10% in body weight, have beneficial effects on CVD risk [9]. According to some investigations, high protein, calorierestricted diets enhance weight loss, by producing more satiety and reduced energy intake [10,11] as well as decreased loss of energy expenditure and greater thermogenesis [12,13]. 
Many studies have compared the effects of high protein (HP) diets on glycemic control, lipid profiles, and weight loss to other types of calorie-restricted diets [10,11,14,15]. However, results have been contradictory. Some studies have suggested that HP weight loss diets have more capacity to enhance the lipid profile compared with other types of weight loss control groups [14,16]. In contrast, other studies showed similar results when comparing high protein to normal protein diets [11,15]. Other research suggests more weight reduction through HP diets both in the short term [15] and in the long term [17]. Several studies have not shown differential effects of diet composition (i.e., protein, carbohydrate, and fat) on weight loss [16,18]. Fewer studies have investigated the effect of HP diets on inflammatory factors like CRP [16,19]. Additionally, different proteins are likely to have varied effects [16] since consumption of high protein from animal sources, especially red meat, might lead to insulin resistance, bone loss [16,19,20], and hypertension [20]. The source of protein is important, and a mixture of animal and plant sources may have enhanced benefits. To our knowledge, recent studies have not evaluated effects of the types of protein present in high protein diets. Furthermore, examining protein intake derived equally from dairy products and meat sources has not been considered in prior studies. Rather, most high protein weight loss diets have focused on animal protein with little attention to vegetable protein intake. Given that few high protein trials have considered the type of protein intake, we examined the effect of an HP weight loss diet composed of 50% plant and 50% animal sources of protein on body weight and cardiovascular risk. Subjects. A convenience sample of sixty-three overweight and obese women, referred to Isfahan Nutrition Clinic, was recruited to participate in this randomized dietary trial between February 2011 and July 2012. Then, simple random sampling was used to randomly allocate subjects into two groups. According to the formula, 17 subjects were needed in each group for adequate power. Subjects were included if they were between 20 and 65 years of age and had a body mass index (BMI, kg/m²) of >25, were nonsmokers, and had no history of renal, liver, and metabolic diseases or type 1 or 2 diabetes. Women were excluded if they had gastrointestinal, respiratory, cardiovascular, metabolic, liver, and renal diseases, had macroalbuminuria, or were pregnant or lactating. The study was explained to each subject, after which written informed consent was obtained from all participants. The study was approved by the Research Council and Ethics Committee of the Food Security Research Center, Isfahan University of Medical Science, Isfahan, Iran. This clinical trial is registered with ClinicalTrials.gov as number NCT01763528. Study Design. Women were randomly assigned to one of the isocaloric energy-restricted diets (a 200-500 kcal reduction of total energy) for three months according to a parallel design while matched on age, BMI, and medication use. Participants were not aware of their dietary group assignment at baseline (i.e., consumption of the high protein diet or standard protein diet). The HP intervention (n = 30) was a weight loss diet with 25% of energy from protein, 45% from carbohydrate, and 30% of energy from fat. 
The control group (n = 30) followed a weight loss diet with 15%, 55%, and 30% of energy from protein, carbohydrate, and fat, respectively. The total amount of protein was divided between animal and plant sources in a 1 : 1 ratio. Also, animal sources were derived half from meats (e.g., red meat, fish, poultry, egg, and other meat products) and half from dairy products (including milk and yogurt). A dietician provided participants with individual regimen consultation and instructions on dietary requirements at the start and once a month throughout the study. No adverse effects were reported by any participant during the study. Participants completed three-day consecutive food records before each visit. Energy and macronutrient intake was analyzed by Nutritionist IV software. A 24-hour physical activity record (in MET) was conducted at the beginning and the end of the study. We used the Maroni formula along with urinary urea nitrogen (UUN), as a marker of protein intake, to assess adherence to the prescribed diets. Assessment of Anthropometric Measurements and Blood Pressure. Every two weeks, participants were weighed to the nearest 100 grams. Participants were weighed wearing light clothing and without shoes after fasting overnight. At baseline, height was measured using a measuring tape after removal of the participant's shoes. Body mass index (BMI) was calculated as weight (kg)/height (m²). At baseline, on a monthly basis and at the end of the study, waist circumference was measured to the nearest 0.1 cm over light clothing at the midpoint between the anterior superior iliac crest and the lower rib at the maximum girth, using nonstretchable tape, without any pressure to the body surface. After 15 minutes of rest, we measured participants' blood pressure three times in the sitting position and recorded the average of the three measurements. Systolic blood pressure was defined as the appearance of the first sound, and diastolic blood pressure was defined as the disappearance of the sound (Korotkoff phase 5). Blood pressure was measured at week 0 (baseline) and week 12. Assessment of Biochemical Measurements. Collection of total 24-hour urine output commenced at 07:00 (except for the first morning urine) at weeks 0 and 12. According to standard protocol, fasting blood samples were collected at baseline and week 12 while subjects were sitting. Samples were centrifuged within 30-45 minutes of collection for 10 minutes at 500 ×g and at 40 °C. Samples were analyzed using an autoanalyzer (Selectra 2; Vital Scientific, Spankeren, The Netherlands). HDL cholesterol, LDL-c, fasting glucose, and total cholesterol were measured using an enzymatic kit. Triglyceride was measured with glutathione oxidase. High-sensitivity C-reactive protein (hs-CRP) was measured using ELISA and an enzymatic kit. Protein intake was estimated from urinary urea nitrogen (UUN) by using the Maroni formula: protein intake (g/day) = 6.25 × (UUN (g/day) + 0.031 × weight (kg)). Statistical Analysis. Baseline and end values of cardiovascular risk factors including weight, waist circumference, LDL-c, HDL-c, total cholesterol, fasting blood glucose (FBG), triglyceride (TG), systolic blood pressure (SBP) and diastolic blood pressure (DBP), and hs-CRP in the high protein diet and control diet groups were compared using paired t-tests. Percent change in cardiovascular risk factors and hs-CRP in the high protein diet and control diet groups were compared using t-tests. 
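To make the adherence check and the group comparisons described above concrete, the sketch below applies the Maroni estimate to hypothetical values and compares within-group changes and between-group percent changes; all numbers are invented for illustration, and scipy's t-tests stand in for the analyses that were actually run in SPSS.

```python
import numpy as np
from scipy import stats

def maroni_protein_intake(uun_g_per_day, weight_kg):
    """Maroni estimate of daily protein intake (g/day) from 24-h urinary
    urea nitrogen; used here only as an adherence check."""
    return 6.25 * (uun_g_per_day + 0.031 * weight_kg)

print(maroni_protein_intake(uun_g_per_day=9.0, weight_kg=70.0))  # hypothetical subject

# Hypothetical baseline and week-12 weights (kg) for two small groups
baseline_hp = np.array([74.5, 72.0, 75.8, 73.1])
end_hp = np.array([70.1, 67.9, 71.2, 68.8])
baseline_sp = np.array([68.2, 66.5, 69.0, 67.4])
end_sp = np.array([65.9, 64.1, 66.4, 64.9])

# Within-group change (paired t-test) and between-group comparison of
# percent change (independent t-test), mirroring the analysis described above
print(stats.ttest_rel(baseline_hp, end_hp))
pct_hp = 100 * (end_hp - baseline_hp) / baseline_hp
pct_sp = 100 * (end_sp - baseline_sp) / baseline_sp
print(stats.ttest_ind(pct_hp, pct_sp))
```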
Analysis of covariance (ANCOVA) was used to adjust the effects of age, BMI, and medication use on CVD risk. We used SPSS software (version 16.0; SPSS Inc., Chicago IL, USA) for the statistical analyses. < 0.05 was considered statistically significant. Results Of the initial 63 participants in the trial, 3 dropped out due to nonparticipation in the first regimen consultation ( = 3). Thus, the study was completed by 60 participants ( = 30 subjects for the control group and = 30 subjects for the high protein group) ( Figure 1). The mean (±SD) of baseline BMI was 26.8 ± 1.1 versus 27.2 ± 1.2 kg/m 2 in the control and high protein groups, respectively ( = 0.10). The mean age was 40.0 ± 2.4 and 44.1 ± 3.1 years in the control and in high protein groups, respectively ( = 0.09). Participant adherence to the diets was assessed by analysis of 24-hour food records. We also assessed compliance to the high protein diet by using the Maroni formula, indicating that participants had relatively good compliance. Medication use among the subjects included Algomed ( = 20 in both groups), venustate ( = 6 in both groups), SlimQuick ( = 22 in both groups), and Dine Bran ( = 26 in both groups). Physical activity level was not significant at baseline (21.3±4.6 in control group and 23.0±4.9 MET⋅h/day in HP group; = 0.34) and at the end of study (26.1 ± 5.5 in control group and 25.2 ± 5.9 MET⋅h/day in HP group; = 0.44). This was not significantly different even during the study. Baseline and end values of CVD risk factors and hs-CRP in the high protein and control diet groups are shown in Table 1. Most of the baseline variables were not significantly different except for baseline weight and SBP ( = 0.01). Baseline mean (±SD) for weight was 67.93±1.5 kg and 74.13± 1.00 kg in the control and high protein groups, respectively ( = 0.01). After three months of the intervention, weight, waist circumference, TG, FBG, and SBP were significantly reduced in both the control and high protein diet groups. Total cholesterol was reduced in the control group, which was marginally significant (−6.4 mg/dL, = 0.07). However, this reduction was not significant in the high protein diet group (−2.9 mg/dL, = 0.24). Serum level of LDL-c was substantially changed in the control group (−4.77 mg/dL, = 0.04). However, this parameter was not significantly reduced in high protein group (−2.7 mg/dL, = 0.14). We did not observe any significant change in serum level of HDL-c either in the control or high protein group ( = 0.32 and = 0.45, resp.). We observed a marginally significant reduction in DBP in the control group (−4.077 ± 2.3 mg/dL, = 0.05). This reduction was not significant in the high protein group (−1.2 ± 1.8 mg/dL, = 0.31). Serum level of hs-CRP reduced in the control group after three months of the intervention (2.91 ± 1.81 mg/dL at baseline and 2.83 ± 1.75 mg/dL at the end of trial, = 0.05), which was marginally significant. Percent changes of cardiovascular risk factors and serum level of hs-CRP in the high protein diet and control groups are presented in Table 2. The percent change in weight and waist circumference was greater in high protein group compared to control group (weight: −6.10 ± 0.34 and −3.9 ± 0.26 kg, = 0.0001 in high protein and control groups, resp.; waist circumference: −5.06 ± 0.28 and −3.03 ± 0.21 cm in high protein and control groups, resp.). 
Percent change for the serum level of TG, LDL-c, HDL-c, and DBP was not significantly different in the control and high protein groups in crude models or after adjustment for potential confounders. We observed a significant reduction in FBG ( = 0.02) and marginally significant reductions in total cholesterol ( = 0.09), SBP ( = 0.08), and hs-CRP ( = 0.07) in the control compared to high protein diet group, even after adjustment for potential confounders. There was Values are mean ± SEM. 2 These values resulted from -tests between the baseline values and alsotests between the end values separately; < 0.05 was considered statistically significant. 3 These values resulted from paired -test between baseline and end values in each group individually; < 0.05 was considered statistically significant. Values are mean ± SEM. 2 These values resulted from -tests between the control diet group and the high protein diet group; < 0.05 was considered statistically significant. no significant difference between high protein diet group and control group in serum levels of HDL-c, after adjustment for potential confounders (0.3 ± 0.42, = 0.68). Serum levels of hs-CRP were reduced in both the control group (−0.08±0.11, = 0.06) and high protein group (−0.04 ± 0.09, = 0.06). International Journal of Endocrinology 5 Values are mean ± SEM. 2 These values resulted from -tests between the control diet group and the high protein diet group; < 0.05 was considered statistically significant. Forty-one percent and 16% of the patients in HP group lost more than 5% and 10% of their baseline weight, respectively. However, among the controls, 29% and 8% lost more than 5% and 10% of their baseline weight, respectively. Table 3 shows the dietary intake of both groups in detail during the study. Discussion Findings of the present study suggest that a high protein low-fat diet had more positive effects on weight and waist circumference reduction compared to a standard high protein diet. However, the control diet conferred more benefits on cardiovascular disease risk factors compared to the high protein diet. To the best of our knowledge, this is the first study to evaluate the type of protein composition and to assess diets with protein sources from all animal types (both low-fat dairy products and different types of meats) as well as plant sources. According to our study, the energy-restricted HP diet had more beneficial effects on weight loss and waist circumference compared to the control diet. Numerous studies have demonstrated favorable effects of HP diets on weight loss [15,17,19,21]. In this parallel randomized trial among overweight and obese individuals, weight loss in subjects who consumed a high protein low-fat diet (34% protein) was equal to that of subjects who consumed a standard protein high-fat diet (18% protein) [21]. While in another study, an HP meal plan showed more beneficial effects on fat mass reduction among obese adults but found no differences in weight loss between HP and standard protein (SP) groups after 12 weeks [11]. The energy-restricted HP diet (33% protein) combined with resistance exercise for 16 weeks could have a larger impact on weight loss and result in a larger reduction in waist circumference compared with an SP diet (19% protein) among overweight and obese patients with type 2 diabetes [15]. Similar results have been observed in healthy obese individuals [22,23] and hyperinsulinemic men [24]. 
Greater weight reduction following an HP energy-restricted diet compared with a high carbohydrate diet may occur due to increasing postprandial thermogenesis [12], as postprandial thermogenesis has been correlated with content of protein in a meal [25,26]. Protein content in the diet can improve appetite and hunger motivation [27]. A larger protein content in the diet can reduce body weight by mediating more satiety and energy intake reduction [10,11]. One study demonstrated that consumption of >1.6 gr/kg/day of protein may increase the hypertrophic response to resistance exercise and increase weight reduction maintenance [28]. As obesity, particularly central obesity, is an important risk factor associated with cardiovascular disease and metabolic syndrome [29], an HP weight loss diet is a promising strategy for ameliorating risk factors for CVD. Although some prior published research by other investigators has found beneficial effects of high protein diets on lipid profiles [14,15,30,31], we did not observe these results among the overweight and obese women in the present study. Percent change of TG concentration decreased more in the HP diet group compared with the control diet group in our study but was nonsignificant. Previous research has shown that HP diets can improve the lipid profile, independent of weight loss [32]. Another study showed that consumption of an HP diet with 33% protein for 16 weeks reduced TG, TC, and LDL-c but showed no differences compared to the control diet [15]. In our study, percent change in total cholesterol marginally significantly decreased in both the HP and control groups with greater reduction in the control group. Other research has shown that the HP diet has had beneficial effects in reducing LDL-c, TC, and TG and in increasing HDL cholesterol after 64 weeks of weight loss [17]. Also, it has been suggested that the HP content of the diet may result in a greater improvement of TG level because of the diet's lower carbohydrate content [32]. In addition, another study of diabetics reported greater improvement in the lipid profile (TC, TG, LDL-c, and HDL-c) among diabetic patients who adhered to an HP diet [30,31]. In contrast, others have found that improvement in the lipid profile can occur with weight loss in the absence of dietary protein sources [19]. Noakes et al. showed that markers of CVD risk factors were favorably affected by a weight loss diet with no difference between the HP and control diet groups, except for the TG level which reduced more from the HP diet [16]. Martínez et al. revealed that low content of carbohydrate in the HP diet can lead to reduced VLDL TG production [33]. In contrast to the findings of the present study, the study by Wolfe and Giovannetti reported greater reductions in TC and LDL-c levels [32,34], and a greater increase in HDL cholesterol [32] has been observed, with no change in TG in response to higher quantities of dietary protein. In our study, the baseline of FBG in the HP diet was substantially higher than baseline of FBG in the control group. Percent change of FBG was significantly reduced both in the HP diet and in the control diet groups, with greater reduction in the control group after adjustment for potential confounders. In a parallel trial which was conducted in overweight and obese hyperinsulinemic individuals, the glucose response reduced 6.8% more in subjects who consumed an HP diet (27% protein) compared with the SP group (16% protein) after 16 weeks of a weight loss program [20]. 
Plasma glucose concentration was significantly improved in both HP and SP groups, with no difference between the two groups [15]. Similar results have been achieved for long-term periods of weight loss among healthy obese women [17]. A significant difference was observed in FBG after 12 weeks of HP meal replacement compared to the SP meal [11]. In our investigation, we were unable to find substantial effects of the HP diet on blood pressure, although marginally significantly reductions in SBP were observed in both diet groups. In another study, HP content had no effect on SBP and DBP in obese hyperinsulinemic patients [20]. In contrast, some previous studies reported lower blood pressure after weight loss, independent of dietary protein content [15,19]. More research is needed to clarify the impact of HP diets on blood pressure and FBG concentration. Our results suggest marginally significantly improvements in hs-CRP among individuals who consumed the HP diet. CRP is a strong predictor of cardiovascular disease which improves following weight loss and reduction of insulin sensitivity [19]. In several studies, CRP concentration has been shown to decrease with weight reduction, independent of dietary composition [16,17,19]. Additional research is needed to investigate the influence of HP diets on changes in CRP. We observed more improvements in CVD risk factors in the control diet compared with the HP diet. Although we distributed the additional protein between plant and animal sources, the animal protein sources were higher in the HP group compared with the control group, such that it may have unfavourably affected some CVD risk factors in the HP group. Although our study suggested that there were improvements in weight among HP groups, the effect of HP diet on other CVD risk factors is not clear. Therefore, more research is needed to more closely evaluate all CVD and other chronic disease outcomes after adherence to an HP diet. The present study had several strengths including the fact that we examined the effects of a mixture of animal and plant sources of protein. Animal sources, particularly red meat, may lead to CVD because of their saturated fat [16,20]. Also, we were interested in low fat dairy products as animal sources, which may lead to attenuation of bone loss [35], an important concern in postmenopausal women. We controlled some important potential confounders which may affect CVD risk factors. Additionally, use of the Maroni formula in conjunction with the analysis of dietary intake aided in assessing protein intake. One limitation is that we were not capable of blinding the dietitian because we were using a dietary intervention. Also, as the trial was conducted among only women, we cannot generalize the results to the general population. The study follow-up period was relatively short, only three months. Given the varied effects of dietary interventions depending on the intervention duration, additional studies with longer follow-up periods are needed. Longitudinal dietary interventions are important in order to gain a better understanding of long-term diet adherence and more precise estimates of the effects. However, difficulties such as budget limitations and lower compliance of participants in longitudinal dietary trials may provide challenges. To further assess the importance of HP diets on CVD risk factors, further research should be conducted using varied proportions of protein, and research should be conducted in different populations with longer intervention periods. 
Participants did not mention any specific adverse events in the present study. This might be due to the kinds of protein sources in our prescribed diet compared to those of previous studies, as we provided a high protein diet using varied sources of protein. One kind of dairy that is often preferred by Iranians is yogurt, which may also protect from many gastrointestinal disorders. These balanced sources of dietary protein may be one reason that we did not receive any reports of adverse effects. As the reports show some unfavorable dietary behaviors among Iranian population [36,37], conducting interventional dietary research to clarify the suitable diet is necessary. Conclusion Both prescribed diets had positive effects on anthropometric measurements, but the HP diet resulted in a greater reduction of body weight and waist circumference. Under isoenergetic weight loss diets, total cholesterol, hs-CRP, and SBP were marginally significantly reduced independent of dietary protein content. We were unable to observe significant changes in DBP, HDL-c, and LDL-c cholesterol in the present study. FBG was substantially reduced in both diet groups with a greater reduction in the control group. Hence, an HP diet consisting of 50% plant and 50% animal sources of protein can reduce weight and waist circumference more than a standard protein diet among overweight and obese women. Further investigations are warranted to confirm these findings and elucidate the potential mechanisms that may explain the changes in anthropometric measurements following an HP diet.
Robust Heart Rate Variability Measurement from Facial Videos Remote Photoplethysmography (rPPG) is a contactless method that enables the detection of various physiological signals from facial videos. rPPG utilizes a digital camera to detect subtle changes in skin color to measure vital signs such as heart rate variability (HRV), an important biomarker related to the autonomous nervous system. This paper presents a novel contactless HRV extraction algorithm, WaveHRV, based on the Wavelet Scattering Transform technique, followed by adaptive bandpass filtering and inter-beat-interval (IBI) analysis. Furthermore, a novel method is introduced to preprocess noisy contact-based PPG signals. WaveHRV is bench-marked against existing algorithms and public datasets. Our results show that WaveHRV is promising and achieves the lowest mean absolute error (MAE) of 10.5 ms and 6.15 ms for RMSSD and SDNN on the UBFCrPPG dataset. Introduction Heart rate variability is the variation in time between consecutive heartbeats. It is closely related to the autonomous nervous system (ANS), actual heart sound, blood pressure, and mental well-being [1]. Traditionally, HRV has been measured using a contact-based electrocardiogram (ECG), which may cause some patients to feel uncomfortable because it requires attaching electrodes to various parts of the body. Recently, non-contact measurement of HRV has gained momentum due to its user-friendly nature and suitability. Contactless HRV can be obtained from an optical technique known as remote plethysmography (rPPG) by using an off-shelf digital camera. In recent years, there has been a growing interest in heart rate variability (HRV) estimation using remote photoplethysmography (rPPG), and many researchers have focused on developing robust and accurate algorithms for this purpose. Typically, a pipeline for rPPG-based HRV estimation includes several stages, such as face detection and tracking, skin segmentation, region of interest (ROI) selection, and rPPG construction [2][3][4][5]. In addition, there are numerous post-processing steps that can be applied to clean, filter, or denoise the rPPG signal to improve the accuracy of HRV estimation. One such study by Mitsuhashi et al. [6] obtained the rPPG signal from facial videos using the spatial subspace rotation (2SR) method [7]. 2SR is an algorithmic method that extracts a pulse signal by calculating the spatial subspace of skin pixels and measuring its temporal movement, and it does not require skin-tone priors. They subsequently applied detrending, heart-rate frequency bandpass filtering (0.75-3 Hz), interpolation, and valley detection to source HRV and estimate stress. Martinez-Delgado et al. [8] employed color amplification on the red channel and peak detection to calculate multiple time-domain and frequency-domain HRV metrics. Qiao et al. [9] utilized independent component analysis (ICA) to obtain the rPPG signal and subsequently applied detrending, normalization, and moving average filter to further clean and smooth the rPPG signal. Afterward, they acquired heart rate and time-domain HRV metrics by detecting the peaks of the cleaned rPPG signal. Li et al. [2] obtained the rPPG signal using a CHROM algorithm [10], a method that exploits color differences in RGB channels to eliminate specular reflection and reduce noise due to motion. 
Then, they proposed a post-processing denoising step called the Slope Sum Function (SSF), which enhances the quality of the signal and facilitates peak detection by increasing the upward trend and decreasing the downward trend of the rPPG signal. Lastly, heart rate and time-domain HRV metrics were evaluated based on the peak detection results. A wavelet-based approach was proposed by Huang et al. [3] and He et al. [4]. Huang et al. [3] sourced the rPPG signal by utilizing the CHROM method [10] and further added a post-processing step based on a continuous wavelet transform, termed CWT-BP and CWT-MAX. CWT-BP is defined as a bandpass filter (0.75-4 Hz), while CWT-MAX is a denoising step based on the scale of the CWT coefficients. During the CWT-MAX step, windows from the signal are chosen and the coefficients that have the largest values within a particular window are selected to reconstruct the signal by inverse CWT. He et al. [4] further improved CWT-based denoising methods by introducing CWT-SNR, which selects coefficients based on the signal-to-noise ratio of the reconstructed rPPG signal. Both methods implemented peak-detection algorithms to acquire time-domain HRV metrics and heart rate. In other research, Gudi et al. [5] sourced the rPPG signal by using the plane orthogonal to skin (POS) method [11], which projects the pulsatile part of the RGB signal onto the plane orthogonal to the skin, thereby reducing specular and motion noise. They then applied further motion noise suppression and narrow fixed bandpass filtering to clean the rPPG signal and subsequently extracted the HRV by detecting peaks and applying the HRV formulae. They calculated both time-domain and frequency-domain metrics and benchmarked and tested their algorithm on numerous public datasets. Furthermore, they introduced a method to remove noise artifacts from ground truth PPG signals. In another study, Pai et al. [12] introduced a novel approach, HRVCam. HRVCam applied signal-to-noise ratio (SNR) based adaptive bandpass filtering to the rPPG signal and then used a discrete energy separation algorithm (DESA) to calculate various frequency bands. These instantaneous frequencies are transformed to the time domain to evaluate time-domain HRV metrics. Overall, traditional methods have focused mostly on post-processing steps such as bandpass filtering, detrending, and the continuous wavelet transform to clean noisy rPPG signals. A deep learning approach was presented by Song et al. [13]. According to this approach, first, a candidate rPPG signal is calculated with traditional algorithmic methods such as CHROM [10]. Then, a generative adversarial network (GAN) is employed to filter and denoise the signal by generating a cleaner version of that rPPG signal. An additional study by Yu et al. [14] proposed an end-to-end deep learning model to obtain an rPPG signal. Their model is based on different 3D-CNN and LSTM networks and benchmarked against heart rate and frequency-domain HRV metrics. All listed HRV algorithms suffer from relatively poor results when compared with ground truth contact-based values. This may be due to limitations in the non-contact measurement techniques used by these algorithms, which can result in inaccuracies and lower overall performance. Additionally, deep learning models require a large amount of data to train on, which can be expensive. Since HRV is highly sensitive to noise, improved algorithms should be devised to decrease the gap between contact and camera HRV.
Therefore, in this paper, we introduce the following:
1. A novel HRV algorithm, WaveHRV, based on the Wavelet Scattering Transform technique, followed by adaptive bandpass filtering and statistical analysis of inter-beat-intervals (IBIs);
2. Validation of our algorithm on various public datasets, which achieved promising results;
3. An innovative preprocessing step to filter out noisy ground truth data.

Method

The heart rate variability extraction pipeline from a video is presented in Figure 1. Initially, the subject's face is detected and tracked over time by MediaPipe FaceMesh [15]. This is followed by a skin segmentation step that removes non-skin regions, which improves signal quality. Then, the meanRGB signal is acquired by averaging each frame spatially and concatenating the averages temporally. This meanRGB signal is fed to the plane orthogonal to skin (POS) [11] algorithm to obtain the candidate rPPG signal. POS is a robust method that projects the pulsatile part of the RGB signal onto the plane orthogonal to the skin while employing division and multiplication of the different channels to cancel out noise due to motion and other specular artifacts that are assumed to affect all color channels equally. The rPPG signal is interpolated to the nearest power-of-2 frame rate to make it easier to work with the scattering transform and to space the samples equally in time. Subsequently, the scattering transform (Section 2.1), windowing method (Section 2.2), and IBI analysis (Section 2.3) are applied to obtain HRV from the interpolated rPPG signal.
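As a concrete illustration of the signal-construction step described above, the following is a minimal Python sketch of the POS projection applied to a spatially averaged RGB trace. The function name, the 1.6 s window length, and the small epsilon guards are our own illustrative choices rather than code from the paper; only the overall procedure (temporal normalization, fixed projection axes, alpha tuning, and overlap-add) follows the cited POS method [11].

```python
import numpy as np

def pos_rppg(mean_rgb: np.ndarray, fps: float, window_sec: float = 1.6) -> np.ndarray:
    """Plane-Orthogonal-to-Skin (POS) projection of a mean-RGB trace.

    mean_rgb : array of shape (N, 3), one spatially averaged RGB triplet per frame.
    Returns a 1-D candidate rPPG signal of length N.
    """
    n = mean_rgb.shape[0]
    l = max(int(window_sec * fps), 2)              # sliding window length in frames
    proj = np.array([[0.0, 1.0, -1.0],             # fixed projection axes of POS
                     [-2.0, 1.0, 1.0]])
    h = np.zeros(n)
    for t in range(n - l + 1):
        block = mean_rgb[t:t + l]                  # window of raw RGB means
        norm = block / (block.mean(axis=0) + 1e-9) # temporal normalization
        s = norm @ proj.T                          # two projected signals
        s1, s2 = s[:, 0], s[:, 1]
        p = s1 + (np.std(s1) / (np.std(s2) + 1e-9)) * s2   # alpha tuning
        h[t:t + l] += p - p.mean()                 # overlap-add after mean removal
    return h
```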
Scattering Transform

The scattering transform (ST) [16] is a complex-valued convolutional neural network (CNN) whose filters are fixed wavelets, with the modulus as the non-linearity and averaging as the pooling. It is invariant to translation, frequency shifting, and change in scale. The wavelet scattering transform can be constructed by taking a signal and passing it through a series of wavelet filters, called filter banks, and the modulus non-linearity. Each wavelet within the filter bank is derived from a single wavelet by changing frequency and time. The output of each layer is then passed through another set of filter banks and the modulus non-linearity, creating a hierarchical structure of representations. Each layer captures different levels of time and frequency information, with the first layer capturing the energy density of the frequencies over time. The Nth-order coefficients are given by

$S_N r(t) = \left|\,\left|\,\cdots\,\left| r \ast \psi_{\lambda_1} \right| \ast \psi_{\lambda_2}\,\cdots\right| \ast \psi_{\lambda_N} \right| \ast \phi(t)$

where r(t) is a signal, ψ_λ is a wavelet of scale λ, φ is average pooling, |...| is the complex-valued modulus operation, and * is the convolution operation. In this paper, the Kymatio library [17] was used to implement the scattering transform, and the Morlet wavelet was chosen to convolve with the signal, which is given by

$\psi(t) = K\, e^{i\omega t}\, e^{-t^{2}/2}$

where K is a normalization constant, ω is frequency, and t is time. The Morlet wavelet has been previously employed in PPG research [18] because its Gaussian envelope ensures that it is localized in both the time and frequency domains, making it suitable for analyzing signals with non-stationary and time-varying properties. Lastly, an example of the first-order ST coefficients of a PPG signal with a pooling size of 16 s and a filter bank of 20 is given in Figure 2. Frequencies on the y-axis increase exponentially, while time on the x-axis is given as discrete indices that are multiples of 16 s due to the chosen pooling size.
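To show how the first-order coefficients can be computed in practice, here is a minimal sketch using the Kymatio library mentioned above. Interpreting the 16 s pooling window as an averaging scale of 2^J samples and the "filter bank of 20" as Q = 20 wavelets per octave are our assumptions, as is the use of the meta() fields to select first-order paths; the paper does not publish its implementation.

```python
import numpy as np
from kymatio.numpy import Scattering1D

def first_order_scattering(rppg: np.ndarray, fs: int, pool_sec: int = 16, q: int = 20):
    """First-order wavelet scattering of an interpolated rPPG trace (sketch).

    fs       : sampling rate after interpolation (a power of 2, e.g. 32 Hz).
    pool_sec : averaging (pooling) window in seconds; 16 s in the paper.
    q        : wavelets per octave, controlling frequency resolution.
    """
    J = int(np.log2(pool_sec * fs))                     # pooling scale of 2**J samples
    scattering = Scattering1D(J=J, shape=rppg.shape[-1], Q=q)
    coeffs = scattering(rppg.astype(np.float64))        # all scattering paths
    meta = scattering.meta()
    first_order = coeffs[meta['order'] == 1]            # keep only first-order paths
    freqs = meta['xi'][meta['order'] == 1, 0] * fs      # centre frequencies in Hz
    return first_order, freqs
```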
Windowing

The interpolated rPPG signal is first cleaned with Butterworth bandpass filtering of order 7 with a band of 0.7-5 Hz to acquire rPPG_clean. Then, the first-order scattering transform is applied to the obtained signal, as explained in Section 2.1, with a pooling size of 16 s and a filter bank of 20. The selection of the pooling size and the number of wavelets within the filter bank is task dependent. In the context of our study, simulations revealed that higher frequency resolution generated more favorable outcomes than time resolution. Consequently, a pooling size of 16 s was deemed optimal as it represented a balance between time and frequency resolution. Furthermore, an augmented number of wavelets in the filter bank correlates with an increased frequency resolution. However, this may pose two challenges: firstly, higher computational costs, and secondly, increasing the number of wavelets in the filter bank usually enhances resolution in the higher frequency ranges that are beyond the heart rate region. Afterward, a windowing step, shown in Figure 3, is applied on rPPG_clean in the following manner (a minimal code sketch of these steps is given after the list):
1. For each window of size w, calculate the energy around the first harmonic by the given equation, where w is the window size, E_i is the energy at time i, and x is the difference between the right end of the window and time i.
2. Construct K-Means (K = 3) clustering with frequency and energy, E, as the input and k-means++ as the centroid initialization to obtain a narrow band, as shown in Figure 4. The centers of the clusters are shown in red in Figure 4. The band is then [left centroid, right centroid].
3. Apply Butterworth bandpass filtering on the windowed signal with the previously obtained bands.
4. Subtract the mean of the windowed signal from the windowed signal itself to retain only the pulsatile part and remove the diffuse part.
5. Slide the window over the whole signal with window size = w and step size = s, which can be optimized for different datasets. For instance, in Figure 3, w = 14.5 s and s = 2 s.
6. Reconstruct the cleaned rPPG signal from the windowed segments by adding the segments.
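Below is a minimal sketch of steps 2-6, assuming the per-window energies around the first harmonic (step 1, whose exact expression is not reproduced above) have already been computed from the first-order scattering coefficients. The function names, the Butterworth order used for the per-window filter, and the clipping of the band to valid frequencies are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np
from sklearn.cluster import KMeans
from scipy.signal import butter, filtfilt

def adaptive_band(freqs: np.ndarray, energy: np.ndarray) -> tuple:
    """Steps 1-2: cluster (frequency, energy) pairs and return a narrow pass band."""
    feats = np.column_stack([freqs, energy])
    km = KMeans(n_clusters=3, init='k-means++', n_init=10).fit(feats)
    centres = np.sort(km.cluster_centers_[:, 0])         # centroid frequencies in Hz
    return centres[0], centres[-1]                        # [left centroid, right centroid]

def clean_window(segment: np.ndarray, fs: float, band: tuple) -> np.ndarray:
    """Steps 3-4: band-pass the windowed segment and remove its mean."""
    lo, hi = max(band[0], 0.1), min(band[1], fs / 2 - 0.1)
    b, a = butter(4, [lo, hi], btype='bandpass', fs=fs)
    filtered = filtfilt(b, a, segment)
    return filtered - filtered.mean()

def reconstruct(signal: np.ndarray, fs: float, bands, w_sec=14.5, s_sec=2.0) -> np.ndarray:
    """Steps 5-6: slide the window, clean each segment, and overlap-add."""
    w, s = int(w_sec * fs), int(s_sec * fs)
    out = np.zeros_like(signal, dtype=float)
    for k, start in enumerate(range(0, len(signal) - w + 1, s)):
        out[start:start + w] += clean_window(signal[start:start + w], fs, bands[k])
    return out
```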
Due to the sliding windows, the peaks at the edges will be smaller than in the rest of the signal. This may result in peak detection issues, which can be solved by multiplying both edges of the signal by coefficients (c), as shown in the pseudo-code (Algorithm 1) below:

Algorithm 1: peak amplification of the two ends of the signal.

IBI Analysis

Peaks of the reconstructed signal are detected with the automatic multiscale-based peak detection (AMPD) [19] algorithm, and the inter-beat-intervals (IBIs) are calculated. Then, refined IBIs are obtained by removing physically impossible regions or misplaced peaks and retaining only those IBIs that satisfy the criteria below:
1. A non-overlapping window is slid over the IBIs with a window size of 10. Every IBI in a window should lie within mean(IBI_window) ± 0.2 mean(IBI_window).
** The boundaries for the IBIs should be chosen based on the task. In this research, we estimate the HRV of adults in a seated position.

HRV Metrics

SDNN (standard deviation of NN intervals) is a time-domain HRV metric related to the sympathetic nervous system (SNS) and parasympathetic nervous system (PNS) and associated with physical wellness such as blood pressure regulation, heart and vascular tone, and gas exchange [1]. Multiple studies show [1,20] that the range for short-term SDNN (<5 min) is 32-93 ms, and it is given by

$\mathrm{SDNN} = \sqrt{\frac{1}{N-1}\sum_{i=1}^{N}\left(\mathrm{IBI}_i - \mathrm{IBI}_{mean}\right)^{2}}$

where IBI is the inter-beat interval and IBI_mean is the mean of the inter-beat intervals. RMSSD (root mean square of successive differences) is a time-domain HRV metric related to the PNS [1] and strongly related to human productivity and energy levels. Short-term RMSSD (<5 min) lies within 19-75 ms [1,20]. It is given by

$\mathrm{RMSSD} = \sqrt{\frac{1}{N-1}\sum_{i=1}^{N-1}\left(\mathrm{IBI}_{i+1} - \mathrm{IBI}_i\right)^{2}}$

The Baevsky SI (Baevsky stress index) is a stress metric that represents the mental or physical stress one is experiencing. It is very sensitive to the SNS and has a range of 50 to 1000-1500, depending on the stress level and stress-related illnesses [21]. It is derived using time-domain HRV metrics as follows:

$\mathrm{SI} = \frac{\mathrm{AMo}(\mathrm{IBI})}{2 \times \mathrm{Mo}(\mathrm{IBI}) \times \mathrm{MxDMn}(\mathrm{IBI})}$

where AMo(IBI) is the mode amplitude of the IBIs, Mo(IBI) is the mode of the IBIs, and MxDMn(IBI) is the difference between the maximum and minimum IBI. Finally, LF/HF (low frequency/high frequency) is a frequency-domain HRV metric that represents the balance between the PNS and the SNS [1]. It is calculated by transforming the spectral analysis of the IBIs to the frequency domain with the Fast Fourier Transform (FFT). The LF band [0.04-0.15 Hz] represents the SNS and the HF band [0.15-0.4 Hz] represents the PNS. This is considered a metric that provides insight into the equilibrium of the autonomic nervous system and the resilience of the body to changes, stress, and anxiety [1]. LF/HF values range between 1.1 and 11.6 [1,20].
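The IBI refinement rule and the time-domain metrics above can be expressed compactly; the sketch below is our own illustration (the sample-variance normalization and the 50 ms histogram bin used to locate the Baevsky mode are assumptions, since the paper does not state them).

```python
import numpy as np

def refine_ibis(ibis_ms: np.ndarray, win: int = 10, tol: float = 0.2) -> np.ndarray:
    """Keep only IBIs that stay within ±tol of their local window mean (criterion 1)."""
    kept = []
    for start in range(0, len(ibis_ms), win):           # non-overlapping windows of size 10
        chunk = ibis_ms[start:start + win]
        m = chunk.mean()
        kept.extend(chunk[np.abs(chunk - m) <= tol * m])
    return np.asarray(kept)

def sdnn(ibis_ms: np.ndarray) -> float:
    """Standard deviation of the IBIs (ms)."""
    return float(np.sqrt(np.sum((ibis_ms - ibis_ms.mean()) ** 2) / (len(ibis_ms) - 1)))

def rmssd(ibis_ms: np.ndarray) -> float:
    """Root mean square of successive IBI differences (ms)."""
    return float(np.sqrt(np.mean(np.diff(ibis_ms) ** 2)))

def baevsky_si(ibis_ms: np.ndarray, bin_ms: float = 50.0) -> float:
    """Baevsky stress index from the IBI histogram mode (standard formulation)."""
    edges = np.arange(ibis_ms.min(), ibis_ms.max() + bin_ms, bin_ms)
    hist, edges = np.histogram(ibis_ms, bins=edges)
    amo = 100.0 * hist.max() / len(ibis_ms)                            # mode amplitude, %
    mo = (edges[hist.argmax()] + edges[hist.argmax() + 1]) / 2.0 / 1000.0  # mode, s
    mxdmn = (ibis_ms.max() - ibis_ms.min()) / 1000.0                   # IBI range, s
    return amo / (2.0 * mo * mxdmn)
```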
Evaluation Metrics

In this study, we employed several metrics to assess the performance of our proposed model. The metrics used in the study include the following. MAE (mean absolute error) is a commonly used metric that measures the average absolute difference between predicted and actual values. MAE is defined as

$\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\left| y_i - \hat{y}_i \right|$

where n is the number of data points, y is the actual value, and ŷ is the predicted value. SD (standard deviation) is a measure of the amount of variation or dispersion in a set of values; here it is computed over the per-sample errors and is defined as

$\mathrm{SD} = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}\left( (y_i - \hat{y}_i) - \overline{(y - \hat{y})} \right)^{2}}$

where n is the number of data points, y is the actual value, and ŷ is the predicted value. r (Pearson correlation coefficient) is a measure of the linear correlation between two variables. It is defined as

$r = \frac{\mathrm{Cov}(y, \hat{y})}{\sigma_{y}\,\sigma_{\hat{y}}}$

where y is the actual value, ŷ is the predicted value, and Cov(...) is the covariance. The paired t-test is a statistical test that compares the means of two related samples. In this study, the paired t-test was used to evaluate the significance of the differences between our model's predicted values and the ground truth values. The paired t-test statistic is defined as

$t = \frac{\bar{d} - 0}{\mathrm{SD}_d / \sqrt{n}}$

where d̄ is the mean of the differences between the predicted and actual values, 0 is the null hypothesis value, SD_d is the standard deviation of the differences, and n is the number of data points.

Dataset

To validate the algorithm, we used our private dataset (Stroop) and three publicly available datasets. A summary of these datasets is shown in Table 1.

Stroop Dataset

Fourteen adults of ages ranging from 18 to 33 and with varying skin tones took part in our experiment. Informed consent was obtained from all subjects. Each subject was seated one meter in front of a Logitech Brio camera that recorded video at 60 fps in ambient room lighting. A CONTEC CMS-60C pulse oximeter set at a frequency of 60 Hz was used to record the ground truth PPG signal. The Stroop test [25] was used to induce cognitive stress and allow for HRV measurement under different experimental stages. In the Stroop test, participants are presented with a series of trials, where each trial consists of a color name, such as "red," "blue," or "green," printed in a certain ink color that may or may not match the word itself. The task requires the participant to identify the ink color while ignoring the word itself within a short span of time. The test consisted of three parts: the Rest Stage (1 min), the Stroop test with sound stimulus (3 min), and the Stroop test without sound stimulus (3 min). Subjects were allowed to relax for 2 min between each part. During the Stroop test with sound stimulus, participants heard a pleasant or irritating audio sound depending on whether they gave the correct answer.

Publicly Available Datasets

UBFC rPPG [22] consists of 42 subjects and 42 videos. Each video is approximately a minute long, 30 fps, and uncompressed. Videos are recorded in uniform, ambient lighting, and subjects play math games to induce stress and increase heart rate. VIPL-HR [23] consists of 107 subjects and 2378 videos. Video lengths range from 10 s to 1 min. Videos are compressed and recorded by three different devices. Subjects are recorded under seven different scenarios: a stable scenario, a talking scenario, large head movements, dark lighting, bright lighting, a long-distance scenario, and after exercise. In this paper, we only used videos that are longer than 16 s because it is difficult to obtain meaningful HRV results based on measurements shorter than 15 s. The number of selected videos is 1968. MAHNOB-HCI [24] consists of 27 subjects and 3465 videos. To induce different emotions and feelings of stress, subjects watch different videos while sitting in front of the camera. Videos are compressed and range from 5 s to 3.5 min. In this dataset as well, only videos that are longer than 16 s are selected. The number of selected videos is 1095.
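Before turning to dataset preprocessing, note that the agreement statistics defined in the Evaluation Metrics section can be computed directly with SciPy; the helper below is a short illustrative wrapper, not the authors' code.

```python
import numpy as np
from scipy.stats import pearsonr, ttest_rel

def evaluate(pred: np.ndarray, truth: np.ndarray) -> dict:
    """MAE, error SD, Pearson r, and paired t-test between camera and ground-truth HRV."""
    err = pred - truth
    mae = float(np.mean(np.abs(err)))
    sd = float(np.std(err, ddof=1))              # dispersion of the per-sample errors
    r, _ = pearsonr(truth, pred)                 # linear agreement
    t_stat, p_val = ttest_rel(pred, truth)       # paired t-test on the means
    return {"MAE": mae, "SD": sd, "r": float(r), "t": float(t_stat), "p": float(p_val)}
```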
Dataset Preprocessing

HRV is a sensitive biomarker, and even a slight disturbance during the data collection process can alter the outcome dramatically. A previous study [26] shows the impact of false peaks on HRV measurement and points out that if even a small percentage of peaks are dislocated, the HRV results will be significantly different. Therefore, noisy ground truth data must be preprocessed before being used as a benchmark to compare with camera HRV. There are several reasons why ground truth data are noisy: disconnection of the ground truth device, poor connection of the electrodes with the body, body motion during data collection, slight motion of the fingers inside a pulse oximeter, etc. Examples of a noisy and a clean PPG signal are shown in Figure 5. To filter out these noisy ground truth data, we devised criteria based on biological restrictions and data analysis. First, since we are calculating HRV from the face, any obstacle between the face and the camera leads to a discontinuity in the signal; therefore, this type of data is discarded. Second, if the measured heart rate is beyond physiological and biological limits at any point, then the subject was disconnected from the ground truth data-collecting device, and this kind of ground truth data cannot be used as a reference. Third, van Gent et al. [26] demonstrate that false peaks change HRV results significantly and that removing them is an optimal solution; they suggest removing IBIs that are off by 30% from the mean IBI of the chosen segment. Finally, the study in [1] reports the results of more than 20 studies concluding that short-term SDNN and RMSSD (<5 min) should be less than 92 ms and 75 ms, respectively; contact-based PPG and ECG HRV results that are beyond the physiologically possible region should be removed. These criteria can be summarized as follows:
1. Remove data with a covered face at any instant in time;
2. Remove data that have SDNN > 100 ms or RMSSD > 100 ms.
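A minimal sketch of how these exclusion criteria could be applied to one recording is shown below; the function name and the physiological heart-rate bounds are assumptions made for illustration, since the paper does not list exact limits.

```python
import numpy as np

def accept_ground_truth(ibis_ms: np.ndarray, face_covered: bool,
                        hr_bpm: np.ndarray, hr_limits=(35.0, 220.0)) -> bool:
    """Apply the ground-truth exclusion criteria sketched above to one recording."""
    if face_covered:                                             # covered face at any instant
        return False
    if np.any(hr_bpm < hr_limits[0]) or np.any(hr_bpm > hr_limits[1]):
        return False                                             # HR beyond physiological limits
    mean_ibi = ibis_ms.mean()
    cleaned = ibis_ms[np.abs(ibis_ms - mean_ibi) <= 0.3 * mean_ibi]  # drop IBIs off by >30% [26]
    sdnn = np.std(cleaned, ddof=1)                               # standard deviation of IBIs
    rmssd = np.sqrt(np.mean(np.diff(cleaned) ** 2))              # RMS of successive differences
    return sdnn <= 100.0 and rmssd <= 100.0                      # 100 ms caps from the criteria
```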
Benchmarking WaveHRV

We report the results of WaveHRV on the publicly available datasets in Table 2 for SDNN and Table 3 for RMSSD. All other algorithms except FaceRPPG reported their results on the UBFC rPPG dataset only. It can be seen from Tables 2 and 3 that WaveHRV outperformed all other methods by a significant margin, except FaceRPPG RMSSD on the UBFC rPPG dataset. However, it should be noted that all FaceRPPG results are benchmarked against the cleaned and filtered versions of the datasets. Furthermore, we observed that VIPL-HR and MAHNOB-HCI have large MAEs and even larger standard deviations.

WaveHRV on the Preprocessed Datasets

After filtering out noisy ground truth data according to the criteria mentioned in Section 4.3, we obtained the results presented in Table 4. When comparing the results of Table 4 against Tables 2 and 3, we see that the proposed ground truth preprocessing method performed well. The MAE of SDNN on UBFC rPPG decreased from 10.5 ms to 6.15 ms, whereas RMSSD decreased from 16 ms to 10.46 ms. The effect of the proposed criteria is very noticeable on MAHNOB-HCI and VIPL-HR. Looking at the tables, we can see that the SDNN MAE of VIPL-HR decreased from 29 ms to 13.3 ms, and the RMSSD MAE of VIPL-HR decreased from 41 ms to 15.1 ms. As for MAHNOB-HCI, the SDNN MAE decreased from 69 ms to 17.5 ms, while the RMSSD MAE decreased from 93 ms to 21.5 ms. When we look at the SD of VIPL-HR and MAHNOB-HCI, we see that the SD of VIPL-HR decreased from 45 ms to 11.1 ms for SDNN and from 70 ms to 13.1 ms for RMSSD. The SD of MAHNOB-HCI decreased from 234 ms to 12.5 ms for SDNN and from 317 ms to 14.5 ms for RMSSD. Bland-Altman plots of SDNN and RMSSD for three of the preprocessed datasets, namely UBFC rPPG, VIPL-HR, and Stroop, are shown in Figure 6b, Figure 7b, Figure 8, and Figure 9, respectively. It can be noticed from Figure 6b that for the UBFC rPPG dataset, the mean difference between the ground truth and WaveHRV SDNN is 2.62 ms, and the paired t-test p-value = 0.05. Similarly, hypothesis testing for RMSSD gives a p-value = 0.24. This implies that at a 95% confidence interval (CI), the average WaveHRV SDNN and RMSSD are similar or equal to the average ground truth SDNN and RMSSD. Correlation plots of SDNN and RMSSD for the preprocessed UBFC rPPG are shown in Figures 6a and 7a. It can be noted that the Pearson correlation coefficients between WaveHRV and the ground truth are 0.83 and 0.59 for SDNN and RMSSD, respectively. The Stroop dataset results (Figure 8) indicate that the SDNN mean difference between WaveHRV and the ground truth is −0.29 ms, whereas the RMSSD mean difference is 4.03 ms. Hypothesis testing between contact and camera HRV gives p-values of 0.83 and 0.09 for SDNN and RMSSD, respectively. This means that at a 95% CI, the average WaveHRV SDNN and RMSSD are not different from the ground truth SDNN and RMSSD. Furthermore, looking into the Bland-Altman plots of the VIPL-HR dataset in Figure 9, it can be observed that the SDNN mean error is 1.44 ms (p-value = 0.02) and the RMSSD mean error is −1.58 ms (p-value = 0.06). The paired t-test reveals that at a 95% CI, the mean WaveHRV SDNN is different from the mean ground truth SDNN, while the mean WaveHRV RMSSD is equal to the mean ground truth RMSSD. Finally, MAHNOB-HCI has an SDNN mean error of −2.72 ms and an RMSSD mean error of −8.5 ms, corresponding to p-values of 0.02 and 10^−4, respectively. Statistical analysis at a 95% confidence interval implies that the average WaveHRV SDNN and RMSSD are different from the average ground truth SDNN and RMSSD.
Stress Measurement

The performance of WaveHRV on physiological stress-related metrics is given in Table 5. To obtain better frequency resolution in the frequency-domain metrics, only videos that are longer than 30 s are considered in this part. LF/HF is a metric of the homeostasis and resilience of the autonomic nervous system (ANS) to stress and anxiety. LF/HF values range between 1 and 11.5, and Table 5 illustrates that the LF/HF MAEs lie between 0.26 and 0.67. Therefore, WaveHRV could be used to obtain LF/HF and has the potential to offer insights into the balance and equilibrium of the ANS. The Baevsky stress index (BaevskySI), also known as the strain index, characterizes a person's sympathetic nervous system (SNS) activity and is a good indicator of physical and mental load. Table 5 reveals that the MAE of BaevskySI between the contact-based device and WaveHRV is within 40-60 for the UBFC rPPG, Stroop, and MAHNOB-HCI datasets, while VIPL-HR has a BaevskySI MAE ≈ 100. As mentioned above in Section 3.1, BaevskySI has values between 50 and 1500, and looking at the results of WaveHRV, it can be inferred that our algorithm can be utilized to categorize and identify different stress levels.

Discussion

Both the MAE and SD of the VIPL-HR and MAHNOB-HCI datasets dropped significantly after the implementation of the data preprocessing step described in Section 4. The primary reason for this is disconnected or poorly connected electrodes and pulse oximeters, slight motion of the fingers inside pulse oximeters, and motion during data collection. Furthermore, from Table 4, we note that WaveHRV has lower MAEs on the UBFC rPPG and Stroop datasets than on challenging datasets like VIPL-HR and MAHNOB-HCI. UBFC rPPG and Stroop are not compressed and have uniform ambient light, whereas VIPL-HR and MAHNOB-HCI are compressed and recorded under non-uniform or dim lighting. Moreover, in some scenarios of VIPL-HR, subjects perform large head movements, talk, or are seated farther away from the camera. Similar conclusions can be drawn from the statistical analyses and Bland-Altman plots: when the subjects are not under frequent motion and are in a well-lit environment, as in the UBFC rPPG and Stroop datasets, the average WaveHRV SDNN and RMSSD are similar to the ground truth SDNN and RMSSD. However, for more challenging, real-life scenarios with significant motion and poor lighting conditions, like VIPL-HR and MAHNOB-HCI, the mean WaveHRV results differ from the mean ground truth results.

Conclusions

In this paper, we have presented WaveHRV, a novel algorithm for HRV extraction from a portable camera. We benchmarked our algorithm against other methods and demonstrated that WaveHRV outperforms other methods on publicly available datasets.
Furthermore, we presented a straightforward yet powerful technique to clean ground truth data and highlighted its performance. We also demonstrated the potential of an off-the-shelf camera to measure stress and mental well-being via the Baevsky stress index. A further direction for this research would be the improvement of HRV algorithms under challenging scenarios, such as large head movements and dim lighting, to reduce the discrepancy between camera HRV and contact HRV. In addition, future work could examine the relationship between HRV and different stress, energy, and productivity metrics.
Data Availability Statement: Data sharing not applicable.
Conflicts of Interest: The authors declare no conflict of interest.
v3-fos-license
2019-01-06T13:35:02.972Z
2016-03-31T00:00:00.000
85537324
{ "extfieldsofstudy": [ "Biology" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.nepjol.info/index.php/IJASBT/article/download/14719/11933", "pdf_hash": "6bdc6248e67da8cc6584aa6bf9c920625ae6ed94", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41698", "s2fieldsofstudy": [ "Environmental Science", "Agricultural and Food Sciences" ], "sha1": "6bdc6248e67da8cc6584aa6bf9c920625ae6ed94", "year": 2016 }
pes2o/s2orc
IN VITRO CALLUS PROLIFERATION FROM LEAF EXPLANTS OF GREEN GRAM AFTER IN SITU UV-B EXPOSURE

Callus induction was attempted with leaf explants (third leaf from the top of the canopy) harvested from in situ control and supplementary UV-B irradiated (UV-B = 2 hours daily @ 12.2 kJ m−2 d−1; ambient = 10 kJ m−2 d−1) plants of three varieties of green gram, viz. CO-8, NVL-585 and VAMBAN-2, to study their viability for germplasm conservation. Callus induction occurred both in control and UV-B stressed NVL-585 leaf explants. VAMBAN-2, both in unstressed and UV-B stressed conditions, did not initiate callus. Only control leaf explants from CO-8 proliferated callus. Callus of UV-B irradiated NVL-585 weighed less (51.28%) than the control. Parenchyma cells were smaller in callus induced from in situ UV-B exposed NVL-585 leaf explants. The leaf explants from the UV-B stressed NVL-585 variety of green gram responded to in vitro callus proliferation, making them fit for germplasm conservation for cultivation in a UV-B elevated environment.

In situ UV-B radiation

Green gram (Vigna radiata (L.) Wilczek.), the nitrogen-fixing grain legume, was chosen for the study. Viable seeds of the three varieties of green gram viz. CO-8, NVL-585 and VAMBAN-2 were procured from Saravana Farms, Villupuram, Tamil Nadu and from local farmers in Pondicherry. The seeds were selected for uniform colour, size and weight and used in the experiments. The crops were grown in pot culture in a naturally lit greenhouse (day temperature maximum 38 ± 2 ºC, night temperature minimum 18 ± 2 ºC, relative humidity 60 ± 5%, maximum irradiance (PAR) 1400 μmol m−2 s−1, photoperiod 12 to 14 h). Supplementary UV-B radiation was provided in a UV garden by three UV-B lamps (Philips TL20W/12 Sunlamps, The Netherlands), which were suspended horizontally and wrapped with cellulose diacetate filters (0.076 mm) to filter out UV-C radiation (<280 nm). UV-B exposure was given for 2 h daily from 10:00 to 11:00 and 15:00 to 16:00, starting from 5 DAS (days after seed germination). Plants received a biologically effective UV-B dose (UV-BBE) of 12.2 kJ m−2 d−1, equivalent to a simulated 20% ozone depletion at Pondicherry (12º2'N, India). The control plants, grown under natural solar radiation, received a UV-BBE of 10 kJ m−2 d−1. Leaf explants (third leaf from the top of the canopy) were harvested from 30 DAS crops that received supplementary UV-B irradiation and sunlight in the in situ condition.

In vitro culture with leaf explants

Leaf explants, after appropriate aseptic treatment, were used for in vitro culture. Leaf discs were thoroughly washed with water containing 0.1% Bavistin (a systemic fungicide, BASF India Ltd., Bombay) for 4-5 minutes. They were surface sterilized with 0.1% HgCl2 for 4-5 minutes, washed 6 to 8 times with autoclaved water under a Laminar Air Flow Cabinet (Technico Systems, Chennai), and inoculated aseptically onto the culture medium. The final wash was given with an aqueous sterilized solution of ascorbic acid (0.1%). The surface sterilized explants were dipped in 90% ethanol for a short period (40 seconds).
The leaf discs were inoculated horizontally on MS medium for culture initiation. Different concentrations and combinations of cytokinins (6-benzyl amino purine (BAP) and Kinetin, ranging from 0.1 to 5.0 mg L−1) and auxins (indole acetic acid (IAA), ranging from 0.1 to 1.0 mg L−1) were incorporated in the medium for inducing callus. These cultures were incubated at 28 ± 2 ˚C in the dark for 2-3 days. Subsequently, they were kept under diffused light (22 µmol m−2 s−1 SFP, spectral flux photon) for 8 to 10 days. The light was provided by fluorescent tubes and incandescent bulbs. Temperature was maintained by window air conditioners. Positive air pressure was maintained in the culture rooms in order to regulate temperature and to maintain aseptic conditions. The cultures were regularly monitored and the callus proliferation was recorded at 30 DAI (days after inoculation). The experiments were carried out with three replicates per treatment.

Plant tissue culture media generally comprise inorganic salts, organic compounds, vitamins, and gelling agents like agar-agar. All the components were dissolved in distilled water except the growth regulators. Auxins were dissolved in 0.5 N NaOH or ethanol, and cytokinins were dissolved in dilute 0.1 N HCl or NaOH. For the present study, MS basal medium (Murashige and Skoog 1962) was used as the nutrient medium. MS basal medium was used either as such or with certain modifications in its composition. Sucrose and sugar cubes were added as a source of carbohydrate. The pH of the media was adjusted to 5.8 ± 2 with 0.5 N NaOH or 0.1 N HCl before autoclaving. The medium was poured into the culture vessels. Finally, the medium was steam sterilized by autoclaving at 15 psi (pounds per square inch) pressure at 121 ˚C for 15 minutes.

Chemical composition of MS medium (Murashige and Skoog 1962): constituents and quantities (mg L−1).

Preparation of MS medium

Approximately 90% of the required volume of deionized-distilled water was measured in a container of double the size of the required volume. Dehydrated medium was added into the water and stirred to dissolve the medium completely. The solution was gently heated to bring the powder into solution. Desired heat-stable supplements were added to the medium solution. Deionized-distilled water was added to the medium solution to obtain the final required volume. The pH was adjusted to the required level with NaOH or HCl. The medium was dispensed into culture vessels and sterilized by autoclaving at 15 psi at 121 ˚C for the appropriate period of time.

Photography

The anatomical features were viewed through a Nikon Labomed microscope under incident and translucent light and photographed using a Sony digital camera fitted with an Olympus adaptor. The culture tubes with leaf explants and callus were photographed in daylight using a Sony digital camera fitted with appropriate close-up accessories.

Dendrogram

At least three replicates were maintained for all treatments and the control. The experiments were repeated to confirm the trends. The result of single linkage clustering (Maskay 1998) was displayed graphically in the form of a diagram called a dendrogram (Everstt 1985). The similarity indices between the three varieties of green gram under study were calculated using the formula given by Bhat and Kudesia (2011).
Similarity Index = (Total number of similar characters / Total number of characters studied) × 100

Based on the similarity indices between the three varieties of green gram, a dendrogram was drawn to derive the interrelationship between them, presented in Table 2 and Plate 5. ...1) and auxins (IAA, indole acetic acid = 1.0 mg L−1) was found to be best suited for initiating callus in leaf explants (Plate 1) and was used for leaf explants harvested from all varieties of green gram (Plate 2).

In vitro callus induction

In leaf explants, proliferation of callus occurred in only two of the three green gram varieties taken for the study (Table 1; Plates 2, 3). Callus induction was observed in control leaf explants of CO-8, and in both control leaf explants and leaf explants harvested from in situ supplementary UV-B irradiated NVL-585 crops. However, all leaf explants from VAMBAN-2 failed to induce callus. The induction of callus was delayed by one day in explants harvested from in situ UV-B irradiated NVL-585 compared with the control (Table 1). However, in CO-8, callus was induced one day earlier than in the NVL-585 control. The callus of the in situ UV-B stressed NVL-585 variety of green gram weighed 51.28% less compared to the control. The control CO-8 had better callus, as evidenced by a fresh biomass accumulation 10% higher than that of the control callus of NVL-585. The trend observed in the fresh weight continued in the dry weight of the callus. The callus of the in situ UV-B irradiated NVL-585 variety weighed 78.42% less than the control at 30 DAI, while the CO-8 control recorded the maximum (Table 1). The parenchyma cells of the calluses proliferated from leaf explants were isodiametric with thin cell walls and were distributed evenly throughout the callus in the control samples (Plate 4, Figs. 1, 2). The parenchyma cells that proliferated from the in situ UV-B irradiated callus were smaller by 29.67% and more numerous by 17% in NVL-585 than in the control (Plate 4, Figs. 2, 3). This is in accordance with the findings of Rajendiran et al. (2014a), who reported the failure of leaf explants of some varieties of cowpea to proliferate callus after ultraviolet-B irradiation under in vitro culture. Rajendiran et al. (2014b) and Rajendiran et al. (2014c) opine that the response shown by stem explants and seeds to in vitro culture differs based on the sensitivity of the crops to in vitro culture conditions. Rajendiran et al. (2015a) in Amaranthus dubius Mart. Ex. Thell., Rajendiran et al. (2015e) in Macrotyloma uniflorum (Lam.) Verdc., Rajendiran et al. (2015f) in Momordica charantia L., Rajendiran et al. (2015g) in Spinacia oleracea L., Rajendiran et al. (2015h) in Trigonella foenum-graecum (L.) Ser., Rajendiran et al. (2015i)

Conclusion

From the present experiment, it is evident that the leaves of the NVL-585 variety of green gram are the appropriate explants for germplasm conservation for surviving in a UV-B elevated environment, as they are the only plant material to respond to in vitro culture after UV-B irradiation.

Plate 1: Standardisation of Kinetin (K) concentration in culture media for in vitro regeneration from leaf explants using Vigna radiata (L.) Wilczek var. NVL-585 control samples. (7 DAI - days after inoculation)
Plate 3: A closer view of calluses formed in two out of three varieties of Vigna radiata (L.) Wilczek from leaf explants of control and UV-B irradiated plants.
Plate 4: Cross section of calluses formed in two out of three varieties of Vigna radiata (L.) Wilczek from leaf explants of control and UV-B irradiated plants. (All figs. 400×)

Dendrogram

The variation shown by the leaf explants of the three varieties of green gram after in situ supplementary UV-B radiation in the parameters, viz., time taken for callus initiation, fresh and dry weight of callus, and frequency and size of parenchyma cells in callus, was reflected in the dendrogram. The two varieties, viz., CO-8 and NVL-585, had 42.86% similarity and formed one group, as their leaf explants harvested from either the control or the UV-B stressed crops, or both, proliferated callus. VAMBAN-2 remained alone in the cluster, showing very little similarity (14.29%) with the other two varieties, as both control and UV-B stressed explants failed to induce callus (Table 2; Plate 5).

Plate 5: Dendrogram showing the interrelationship between the three varieties of Vigna radiata (L.) Wilczek in callus proliferation from leaf discs of control and supplementary UV-B irradiated plants - In vitro.
Table 1: Characteristics of callus proliferation in leaf explants of three varieties of 30 DAI Vigna radiata (L.) Wilczek from control and supplementary UV-B exposed conditions - In vitro.
Table 2: The similarity indices in callus proliferation from leaf explants of three varieties of Vigna radiata (L.) Wilczek after supplementary UV-B exposure - In vitro.
Plate 2: Comparison of in vitro callus proliferation from leaf explants of three varieties of Vigna radiata (L.) Wilczek on 30 DAI (days after inoculation).
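As a small worked illustration of the similarity-index formula and the single-linkage dendrogram described in the Dendrogram section above, the SciPy-based snippet below reproduces the clustering from the pairwise similarities reported in the text (42.86% between CO-8 and NVL-585; 14.29% for VAMBAN-2 against the other two). SciPy and the similarity-to-distance conversion are our own illustrative choices, not the authors' software.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import squareform

def similarity_index(similar_characters: int, characters_studied: int) -> float:
    """Similarity Index = (similar characters / characters studied) x 100 (Bhat and Kudesia 2011)."""
    return 100.0 * similar_characters / characters_studied

# Pairwise similarities (%) reported in the text.
varieties = ["CO-8", "NVL-585", "VAMBAN-2"]
similarity = np.array([[100.00, 42.86, 14.29],
                       [42.86, 100.00, 14.29],
                       [14.29, 14.29, 100.00]])
distance = squareform(100.0 - similarity)         # convert similarity to a condensed distance
tree = linkage(distance, method="single")         # single linkage clustering, as in the paper
dendrogram(tree, labels=varieties, no_plot=True)  # set no_plot=False with matplotlib to draw it
```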
v3-fos-license
2022-08-20T15:04:36.826Z
2022-08-01T00:00:00.000
251676558
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/2306-5354/9/8/398/pdf?version=1660730228", "pdf_hash": "cd8c639bef8d12915727420c4ced9cbf7815d2bc", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41699", "s2fieldsofstudy": [ "Biology" ], "sha1": "ad2634eac101719ba5e71071a556e2f3c9a48f93", "year": 2022 }
pes2o/s2orc
The Homeodomain-Leucine Zipper Genes Family Regulates the Jinggangmycin Mediated Immune Response of Oryza sativa to Nilaparvata lugens, and Laodelphax striatellus

The homeodomain-leucine zipper (HDZIP) is an important transcription factor family, instrumental not only in growth but also in fine-tuning plant responses to environmental adversities. Despite the plethora of literature available, the role of HDZIP genes under chewing and sucking insects remains elusive. Herein, we identified 40 OsHDZIP genes from the rice genome database. The evolutionary relationship, gene structure, conserved motifs, and chemical properties highlight the key aspects of the OsHDZIP genes in rice. The OsHDZIP family is further divided into four subfamilies (i.e., HDZIP I, HDZIP II, HDZIP III, and HDZIP IV). Moreover, the protein-protein interaction and Gene Ontology (GO) analyses showed that OsHDZIP genes regulate plant growth and the response to various environmental stimuli. Various microRNA (miRNA) families targeted HDZIP III subfamily genes. The microarray data analysis showed that OsHDZIP was expressed in almost all tested tissues. Additionally, differential expression patterns of the OsHDZIP genes were found under salinity stress and hormonal treatments, whereas under brown planthopper (BPH), striped stem borer (SSB), and rice leaf folder (RLF) infestation, only OsHDZIP3, OsHDZIP4, OsHDZIP40, OsHDZIP10, and OsHDZIP20 displayed expression. The qRT-PCR analysis further validated the expression of OsHDZIP20, OsHDZIP40, and OsHDZIP10 under BPH and small brown planthopper (SBPH) infestations and jinggangmycin (JGM) spraying applications. Our results provide detailed knowledge of OsHDZIP gene family-mediated resistance in rice plants and will facilitate the development of stress-resilient cultivars, particularly against chewing and sucking insect pests.

Introduction

The static nature of plants entails the frequent endurance of various environmental stresses. In response, plants have developed various mechanisms to adjust to the constantly changing environment [1]. These developmental processes are frequently regulated by numerous transcription factors (TFs) [2]. TFs are distributed throughout the genome and can bind to specific functional cis-elements, facilitating the plant's response to various environmental stimuli. The homeodomain-leucine zipper (HDZIP) is a transcription factor family that plays a vital role in plant growth, developmental processes, and stress response [3]. HDZIP is a class of homeobox proteins containing the homeodomain (HD) and leucine zipper (LZ) motifs [3,4]. These two motifs are the signatures of the HDZIP family and have been found in all eukaryotic species. However, their combination within a single protein is found only in plants and, therefore, HDZIP in Plantae is different from that in other organisms [5]. Based on their structure, sequence composition, functional characteristics, and phylogenetic relationship, the HDZIPs are divided into four subfamilies (i.e., HDZIP I, HDZIP II, HDZIP III, and HDZIP IV). Additionally, each subfamily has a unique function and forms a complex interactive network throughout the plant's developmental phases [6][7][8].

Phylogenetic Tree, Motif, and Digital Expression Analysis

To obtain detailed knowledge regarding the evolutionary relationship of the rice HDZIP gene family with species having well-characterized genomes, we investigated the phylogenetic relationships of the HDZIP gene family members of rice with model plants.
Firstly, the HDZIP amino acid sequences were downloaded from their corresponding genome databases and were then aligned with ClustalW software (version 2.1) (http://www.genome.jp/tools/clustalw/) (accessed on 22 February 2022) following the default parameters to examine the evolutionary relationships among the sequences, and the maximum likelihood phylogenetic tree was constructed using MEGA (version 7.0) [35]. Furthermore, the conserved protein motifs of the HDZIP family of O. sativa were predicted using the MEME online server (version 4.12.0) (http://meme-suite.org/) (accessed on 10 March 2022) with the default settings. The details of the top 10 predicted motifs were obtained from the MEME suite. The conserved domains of the HDZIP gene family of O. sativa were predicted using the NCBI-CDD (http://www.ncbi.nlm.nih.gov/Structure/cdd/wrpsb.cgi) (accessed on 10 March 2022). The conserved domain and motif distributions were drawn with Microsoft PowerPoint 365 software. Finally, for the visualization of the heatmap, TBtools (version 1.098765) was used, and the transcriptomic data were retrieved from the rice genome database and the GEO dataset platform from the NCBI [27,32].

Cis-Elements and Gene Ontology of the OsHDZIP Genes

To determine the cis-regulatory elements in each OsHDZIP gene, 1.5 kb of the upstream genomic DNA sequence from the start codon (ATG) was obtained from the rice genome sequence database. Further, we used the plantCARE database (http://bioinformatics.psb.ugent.be/webtools/plantcare/html/) (accessed on 26 February 2022) to identify the cis-elements in the promoter regions of the 40 OsHDZIP genes of O. sativa. Furthermore, for the GO analysis, the OsHDZIP protein sequences were downloaded from the iTAK-Plant Transcription Factor and Protein Kinase Identifier and Classifier (http://itak.feilab.net/cgi-bin/itak/index.cgi) (accessed on 5 March 2022). The obtained OsHDZIP protein sequences were submitted to the "CELLO2GO" online server to determine the predicted functions, such as molecular functions, biological processes, and cellular components, and finally, the GO classifications were recovered using Microsoft Excel 365 software.

Interactive Protein Analysis of the OsHDZIP Genes

The online server STRING (https://string-db.org) (accessed on 8 February 2022) was used for the interactive protein network analysis, using the O. sativa OsHDZIP2 protein as a reference and following the default advanced settings [36]. Furthermore, pathway enrichment analysis was carried out by searching for the OsHDZIP genes in the rice genome database's online pathway enrichment tool [32].

Prediction of Putative MicroRNAs Targeting OsHDZIP Genes

To predict putative miRNA target sites in the OsHDZIP genes in rice plants under BPH and SBPH infestations and JGM spraying applications (briefly described in Section 2), the sequences of rice miRNAs were downloaded from the rice genome database. Moreover, the OsHDZIP CDS sequences were submitted to the online psRNATarget server (Server18) with the default parameters for predicting potential miRNAs targeting the OsHDZIP genes. The interactive network between the predicted miRNAs and the targeted OsHDZIP genes was constructed and visualized using Cytoscape software (version 3.919) by the Institute for Systems Biology (Seattle, Washington 98103, USA), following the same procedure as Rizwan et al. [37].
Insect Rearing and Chemical and Stress Treatment

The insects (i.e., BPH and SBPH) and the rice variety used in the study, Ninjing4, were initially obtained from the China National Rice Research Institute insect repository (Hangzhou, China). The Ninjing4 rice variety, known to lack resistance to these insect infestations, was used for insect rearing. Initially, the BPH and SBPH colonies were reared on rice seedlings in cement tanks covered with fine mesh outdoors (i.e., under natural conditions) for six months (i.e., April to October) and overwintered under lab-controlled conditions. First, we soaked the seeds for 24 h in a water-dipped plastic tray of standard one-quarter size (60 cm H × 100 cm W × 200 cm L) under standard conditions of 26 ± 2 °C, a 16 h L:8 h D photoperiod, and a relative humidity of 80 ± 10% in the ecological laboratory of Yangzhou University. The germinated seeds were transferred to cement tanks covered with fine mesh in an outdoor natural environment and were grown until the six-leaf seedling stage. Secondly, the seedlings were then transferred into plastic pots (R = 16 cm). Finally, the stress treatments proceeded at the tillering stage (40 ± 2, 40 ± 4, and 40 ± 8 days). The JGM (technical grade, 61.7%) used in this study was obtained from Qianjiang Biochemistry Co., Ltd. (Haining, Zhejiang, China). Following the protocol of a previous report by Ahmad et al. (2022), a two hundred parts per million (ppm) solution was prepared with Tween 20 obtained from the Sinopsin Group Chemical Reagent Company (Shanghai, China). The fungicide was then sprayed on the rice seedlings, following the procedure of a previous study [27,38].

Expression Profiling of HDZIP Genes in Oryza sativa

Forty-day-old (40 ± 2 days) rice plants were exposed to BPH and SBPH stress, and samples were collected at 2, 4, and 8 days after infestation. Similarly, JGM was sprayed on the rice plants, and samples were taken 2, 4, and 8 days after treatment. The samples were then stored at −80 °C until further experiments. After that, the total RNA was extracted from the samples using kits (Vazyme, Nanjing, China). First, the DNA was removed using DNase I; the concentration and purity were measured with a NanoDrop 1000 spectrophotometer (Thermo Fisher Scientific, Rockford, IL, USA), and the integrity was checked using 1.5% agarose gel electrophoresis. Finally, the resulting cDNA was used as a template for qPCR (quantitative real-time PCR) analysis using the SYBR Green real-time PCR master mix (Vazyme, Nanjing, China). The qPCR assays were performed in triplicate using a real-time PCR system (Bio-Rad, Hercules, CA, USA) following the manufacturer's protocol [27]. Furthermore, 2 µL aliquots of cDNA were amplified by qPCR in 20 µL reaction volumes using the SYBR Premix Ex Taq TM II (TaKaRa, Dalian, China). The cDNAs were amplified at 95 °C for 2 min, followed by 35 cycles of 95 °C for 10 s, annealing for 30 s, and 72 °C for 30 s, with a final extension step of 72 °C for 10 min in a CFX96 real-time PCR system (Bio-Rad Co., Ltd., CA, USA). The mRNA amounts of all genes were separately quantified and normalized against the stably expressed constitutive reference gene, actin. The specific primers are listed in Table S1. After amplification, the target gene cycle threshold (Ct) values were normalized to the reference gene by the 2^−ΔΔCT method [39]. The mean values of three biologically independent replicates were used for the final graphs [27].
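The 2^−ΔΔCT normalization used above can be illustrated with a short helper; the function name and the Ct values in the example are hypothetical and only show the arithmetic of the method [39].

```python
import numpy as np

def ddct_fold_change(ct_target_treat, ct_actin_treat, ct_target_ctrl, ct_actin_ctrl):
    """Relative expression by the 2^-ΔΔCt method, normalized to actin.

    Each argument is a list or array of Ct values over biological replicates.
    """
    d_ct_treat = np.mean(ct_target_treat) - np.mean(ct_actin_treat)   # ΔCt, treatment
    d_ct_ctrl = np.mean(ct_target_ctrl) - np.mean(ct_actin_ctrl)      # ΔCt, control
    dd_ct = d_ct_treat - d_ct_ctrl                                     # ΔΔCt
    return 2.0 ** (-dd_ct)

# Example with hypothetical Ct values for a target gene under BPH infestation:
fold = ddct_fold_change([24.1, 24.3, 24.0], [18.2, 18.1, 18.3],
                        [26.0, 25.8, 26.1], [18.0, 18.2, 18.1])
```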
Statistical Analysis

The data presented in this paper were analyzed using SPSS software (version 25.0, SPSS Inc., Chicago, IL, USA) for statistical analysis (ANOVA), with statistical significance assessed at a 95% confidence interval (p ≤ 0.05). The data are expressed as the mean ± standard deviation (SD) of three biologically independent replicates for all measured parameters, and finally, GraphPad Prism (version 8.0.2) (GraphPad Software, Inc., La Jolla, CA, USA) was used for graphical representation [27].

Identification and Sequence Analysis of OsHDZIP Genes in Oryza sativa

We retrieved 40 rice OsHDZIP transcription factors from the rice genome database. All genes were given the nomenclature OsHDZ1 to OsHDZ40 (Table 1). Among the 40 OsHDZIP gene family members, 35 OsHDZIP genes resided in the nucleus, whereas OsHDZIP3, OsHDZIP7, OsHDZIP13, and OsHDZIP40 were in the plasma membrane, while 2 genes, OsHDZIP28 and OsHDZIP37, resided in the cytoplasm. Other features of the Oryza sativa OsHDZIP proteins were identified, such as locus ID, subfamily, chromosomal coordinates, molecular weight, chemical properties, and isoelectric point (pI), and tabulated.

The OsHDZIP Genes Conserved Domain Analysis

The HDZIP gene family consists of two functional domains: the homeodomain (HD) and the leucine zipper (LZ). Based on their sequence conservation and functional properties, the HDZIP gene family is divided into four subfamilies (i.e., HDZIP I, HDZIP II, HDZIP III, and HDZIP IV). Among the four subfamilies, the HDZIP I subfamily contains the HD and LZ domains; HDZIP II contains similar HD and LZ domains with an additional CPSCE motif; HDZIP III and HDZIP IV contain HD and LZ with additional START and SAD domains. Only HDZIP III possesses the highly conserved MEKHLA domain (Figure 1). The domain distribution and structure of the OsHDZIP genes in Oryza sativa are shown in Table S3.

Evolutionary Relationship of Oryza sativa HDZIP Genes

After alignment, a maximum likelihood phylogenetic tree was constructed to gain insightful knowledge regarding the evolutionary relationship of the homeodomain-leucine zipper genes in Arabidopsis thaliana [30], Cucumis sativus [8], and O. sativa [32] (Figure 2). The protein sequences of rice (40 OsHDZIP), A. thaliana (47 AtHDZIP) [40], and cucumber (40 CsHDZIP) [8] were downloaded from their respective databases, and the phylogenetic tree was constructed using the maximum likelihood method. Following the same procedure as our previously published article, first, all sequences were aligned using ClustalX software with the default parameters, then the phylogenetic tree was constructed using MEGA6 software, and the final tree was rendered using the Interactive Tree Of Life (iTOL) (version 5) (accessed on 20 March 2022) [41].
Additionally, the OsHDZIP proteins clustered into four subfamilies based on their phylogenetic relationships: HDZIP I, HDZIP II, HDZIP III, and HDZIP IV, and the number of OsHDZIP proteins in each subfamily was counted. Subfamily I accounted for 13 proteins, followed by subfamily II with 12 proteins and subfamily IV with 11 proteins, while subfamily III had the fewest proteins at 4.

Figure 2. Phylogenetic analysis of HDZIP: the phylogenetic tree was generated from the amino-acid sequences of the selected HDZIPs using the maximum likelihood method. All Oryza sativa, Arabidopsis thaliana, and Cucumis sativus HDZIPs, together with their counterparts, were classified into four subfamilies, and the final tree was displayed using the Interactive Tree Of Life (iTOL) (version 5).

An Interactive Network of OsHDZIP Proteins

The protein interaction analysis revealed various other proteins interacting with the OsHDZIP orthologous gene OsHDZIP2 (Figure 3). The OsHDZIP2 protein of the homeodomain-leucine zipper gene family plays a crucial role in plant growth and stress response. For instance, the YUCCA pathway is the most important and best-characterized pathway that plants deploy to produce auxin, an essential hormone in plant development and stress response [42]. In addition, our reference protein, OsHDZIP2, was found to interact strongly with the rice LAZY1 protein and to play an essential role in auxin biosynthesis; the predicted functional partners of the OsHDZIP proteins are listed in Table S4.

Prediction of the Potential MicroRNAs Targeting OsHDZIP Genes

MicroRNAs are a class of small noncoding regulatory RNAs that control gene expression by directing target mRNA cleavage or translational repression [43]. In recent decades, several investigations have reported that miRNAs regulate numerous stress responses, plant development, and signal transduction. Therefore, to better understand the regulatory mechanism of the miRNAs involved in the regulation of OsHDZIP genes, 56 putative miRNAs targeting four OsHDZIP genes were identified, as shown in the network illustration (Figure 4).

Figure 4. Targeted miRNA sites of the OsHDZIP subfamily III genes in rice, representing the functional network assembly of the HDZIP genes in Oryza sativa. The subfamily III genes (i.e., OsHDZIP9, OsHDZIP13, OsHDZIP37, and OsHDZIP40) were mapped to the co-expression database. This analysis revealed 56 unique miRNAs that exhibited various physical/functional interactions, and a network was assembled based on these interactions.
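A minimal sketch of how such a miRNA-target network can be assembled and queried is given below using the networkx package; the individual miRNA-gene pairs listed are illustrative placeholders (only the four subfamily III target genes are taken from the text above), and in practice the input would come from a psRNATarget-style prediction table.

```python
# Sketch of assembling a miRNA-target network; the pairs are hypothetical placeholders.
import networkx as nx

predicted_pairs = [
    ("osa-miR166a-3p", "OsHDZIP9"),
    ("osa-miR166b-3p", "OsHDZIP13"),
    ("osa-miR166c-3p", "OsHDZIP37"),
    ("osa-miR444a",    "OsHDZIP40"),
]

net = nx.Graph()
for mirna, gene in predicted_pairs:
    net.add_node(mirna, kind="miRNA")
    net.add_node(gene, kind="gene")
    net.add_edge(mirna, gene)

# Rank genes by how many distinct miRNAs are predicted to target them.
gene_degree = {n: d for n, d in net.degree() if net.nodes[n]["kind"] == "gene"}
for gene, n_mirnas in sorted(gene_degree.items(), key=lambda item: -item[1]):
    print(f"{gene}: targeted by {n_mirnas} predicted miRNA(s)")
```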
Gene Ontology (GO) Analysis

The Gene Ontology (GO) enrichment pathway analysis revealed various key functions of the OsHDZIP genes in Oryza sativa. The functional predictions were analyzed in three categories: biological process, molecular function, and cellular component [27]. According to the predicted biological processes, OsHDZIP genes play a crucial role in growth-related activities via hormonal and metabolic modulation, as well as in the response to external stimuli (Figure 6). The cellular-component analysis showed that 36 of the OsHDZIP genes were located in the nucleus and 4 genes resided in the plasma membrane, whereas OsHDZIP28 and OsHDZIP37 resided in the cytoplasm and may be involved in many cellular activities. At the same time, the molecular-function predictions indicated that the OsHDZIP genes are involved in DNA-binding activities.

Identified cis-Regulatory Elements in OsHDZIP Genes

The in silico analysis revealed that the upstream regions of the OsHDZIP genes possess various stress-, hormone-, and growth-responsive cis-regulatory elements. Here, we identified twenty-two cis-regulatory elements; of these 22 cis-elements, 13 were related to stress and growth and 9 were responsive to hormonal changes (Table S2). Among the hormone-responsive cis-regulatory elements, the AuxRR-core (auxin-responsive), the GCTCZ-motif (MeJA-responsive), and the TCA element (salicylic acid-responsive) were identified. Moreover, ABRE (abscisic acid-responsive) was found in most of the OsHDZIP genes, while the GARE-motif and P-box (gibberellin-responsive cis-elements) were identified in the upstream regions of several genes.
Drought- and anaerobic induction-responsive cis-elements (MYB and ARE) and light-responsive cis-elements, such as the ZTCT-motif, G-box, and ACE, were also found in the majority of the OsHDZIP genes. In addition, various stress- and growth-responsive cis-elements, such as the CAT-box (involved in meristem expression), TC-rich repeats (stress- and defense-responsive cis-regulatory elements), MBS1 (regulating flavonoid biosynthesis gene expression), and LTR (low-temperature-responsive), were observed in the promoter regions of the OsHDZIP genes. The clustering of these cis-regulatory elements in the promoter regions of OsHDZIP genes implies a role in regulating gene expression during different growth stages and under environmental stimuli; similar results were reported in [8].
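As a hedged sketch of how such promoter-element annotations can be tallied per gene, the snippet below counts element occurrences from a PlantCARE-style export; the file name and column layout are assumptions introduced for illustration, and only the element names themselves are taken from the text above.

```python
# Tally cis-regulatory elements per gene from a hypothetical PlantCARE-style CSV export.
import csv
from collections import Counter, defaultdict

counts = defaultdict(Counter)          # gene -> {element name: occurrences}

with open("oshdzip_promoter_elements.csv", newline="") as handle:
    for row in csv.DictReader(handle):          # expected columns: gene, element, function
        counts[row["gene"]][row["element"]] += 1

hormone_related = {"ABRE", "AuxRR-core", "TCA-element", "GARE-motif", "P-box", "GCTCZ-motif"}
for gene, elements in counts.items():
    n_hormonal = sum(v for name, v in elements.items() if name in hormone_related)
    print(f"{gene}: {sum(elements.values())} elements, {n_hormonal} hormone-responsive")
```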
Gene Structure and Motif Patterns of OsHDZIP Genes

We identified the exon-intron distribution using the CDS and genomic sequences of the OsHDZIP genes. The OsHDZIP family genes contain multiple exons and have varied intron lengths (Figure 7). Among these, subfamilies III and IV showed the highest numbers of exons and introns: all subfamily III members (i.e., OsHDZIP9, OsHDZIP13, OsHDZIP37, and OsHDZIP40) had 20 exons and 17 introns, followed by subfamily IV, in which OsHDZIP3 and OsHDZIP17 had 13 exons and 10 introns, and OsHDZIP16, OsHDZIP20, OsHDZIP32, and OsHDZIP39 had 11 exons and 8 introns. However, OsHDZIP2 had 10 exons and 9 introns, and OsHDZIP24 had 10 exons and 7 introns. The third-highest numbers of exons and introns were recorded in subfamily II, in which OsHDZIP1, OsHDZIP15, OsHDZIP25, OsHDZIP27, and OsHDZIP30 had six exons and three introns, followed by OsHDZIP34 and OsHDZIP38 with five exons and two introns; meanwhile, OsHDZIP18, OsHDZIP19, and OsHDZIP30 had four exons and two introns, OsHDZIP21 had three exons and two introns, and only OsHDZIP4 had the smallest number, with two exons and a single intron. Additionally, subfamily I was counted as having the lowest numbers of introns and exons: OsHDZIP11, OsHDZIP25, OsHDZIP26, OsHDZIP28, OsHDZIP29, and OsHDZIP31 accounted for five exons and two introns, followed by OsHDZIP6, OsHDZIP10, OsHDZIP14, OsHDZIP33, OsHDZIP35, and OsHDZIP36 with four exons and a single intron, while a single gene, OsHDZIP8, was observed with six exons and three introns.

A total of 40 conserved motifs were discovered using the MEME online server (version 5.4.1) (accessed on 10 April 2022) [44] and were found to be appropriate for describing the structure of the HDZIP genes (Figure 8). Among the 40 OsHDZIP genes, OsHDZIP7, OsHDZIP8, OsHDZIP9, OsHDZIP16, OsHDZIP17, OsHDZIP22, OsHDZIP23, and OsHDZIP32 had the highest number with 10 motifs, followed by OsHDZIP2 with 8 motifs and OsHDZIP3 with 7 motifs. Furthermore, OsHDZIP9, OsHDZIP13, OsHDZIP24, OsHDZIP34, OsHDZIP37, and OsHDZIP40 had five motifs. However, OsHDZIP1, OsHDZIP4, OsHDZIP5, OsHDZIP12, OsHDZIP15, OsHDZIP18, OsHDZIP19, OsHDZIP21, OsHDZIP27, OsHDZIP30, and OsHDZIP38, together with all subfamily I members (i.e., OsHDZIP6, OsHDZIP8, OsHDZIP10, OsHDZIP11, OsHDZIP14, OsHDZIP25, OsHDZIP26, OsHDZIP28, OsHDZIP29, OsHDZIP31, OsHDZIP33, OsHDZIP35, and OsHDZIP36), were counted as having three motifs in total.
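Exon and intron tallies of the kind reported above are typically derived from the genome annotation; the following hedged sketch counts exons per transcript from a GFF3 file and infers intron numbers as exon count minus one. The file name and attribute layout are assumptions, not resources released with this study.

```python
# Count exons per mRNA from a hypothetical GFF3 annotation; introns = exons - 1 per transcript.
from collections import defaultdict

exons_per_mrna = defaultdict(int)

with open("oshdzip_models.gff3") as gff:
    for line in gff:
        if line.startswith("#"):
            continue
        fields = line.rstrip("\n").split("\t")
        if len(fields) < 9 or fields[2] != "exon":
            continue
        attrs = dict(item.split("=", 1) for item in fields[8].split(";") if "=" in item)
        parent = attrs.get("Parent", "unknown")
        exons_per_mrna[parent] += 1

for mrna, n_exons in sorted(exons_per_mrna.items()):
    print(f"{mrna}: {n_exons} exons, {max(n_exons - 1, 0)} introns")
```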
Microarray Expression Analysis of HDZIP Genes in Developmental Stages

We examined the different developmental stages and tissue-specific expression to study the biological roles of the OsHDZIP genes in plant growth and development, based on a set of microarray data obtained from the RiceXPro expression database (version 3.0) (accessed on 13 April 2022) [45]. The microarray expression data of the rice OsHDZIP gene family are presented as a heatmap, with blue to red colors reflecting the expression pattern. In twelve tissues (i.e., leaf blade, leaf sheath, root, stem, inflorescence, anther, pistil, lemma, palea, ovary, embryo, and endosperm), the OsHDZIP gene family members showed moderate to high expression (Figure 9). Among the twelve tissues, OsHDZIP27 showed dominant expression in leaf blade, leaf sheath, and root, followed by OsHDZIP18, which showed high transcription in roots. Furthermore, OsHDZIP1, OsHDZIP3, OsHDZIP5, OsHDZIP13, OsHDZIP33, OsHDZIP34, OsHDZIP35, OsHDZIP39, and OsHDZIP40 showed moderate expression in leaf blade, root, inflorescence, and endosperm. Some tissues showed no or extremely low transcript levels, particularly embryo and endosperm; in contrast, leaf blade, root, inflorescence, and anther had high transcription levels. Additionally, the developmental stages including pistil, lemma, and palea revealed responses of many OsHDZIP genes. This expression analysis revealed the essential role of the OsHDZIP gene family during development.

Expression Analysis of OsHDZIP Genes under Salinity

Salinity is an important stress that hinders plant growth and yield, and its injurious effects can be noted throughout the whole plant. To gain insight into the responsiveness of the HDZIP genes to high salinity, transcriptomic expression data were obtained from the publicly available NCBI dataset (GSE102152) (accessed on 17 April 2022) [46]. In the heatmap, the dark orange color on the scale bar represents high expression, the light orange color represents moderate expression, and genes in blue show no expression (Figure 10A).
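A hedged sketch of the heatmap presentation used for these expression matrices is shown below with pandas and seaborn; the input table (genes as rows, tissues or treatments as columns) is a hypothetical stand-in for the RiceXPro and GEO matrices rather than data distributed with this study.

```python
# Plot a gene x tissue expression matrix as a heatmap (blue-to-red scale, as described above).
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

expr = pd.read_csv("oshdzip_expression.csv", index_col=0)   # rows: OsHDZIP genes, cols: tissues

ax = sns.heatmap(expr, cmap="coolwarm", center=0, cbar_kws={"label": "relative expression"})
ax.set_xlabel("Tissue / treatment")
ax.set_ylabel("OsHDZIP gene")
plt.tight_layout()
plt.savefig("oshdzip_heatmap.png", dpi=300)
```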
Expression Analysis of OsHDZIP Genes under BPH, SSB, and SSB_BPH

The transcriptomic expression data were obtained from the NCBI (GSE167872) (accessed on 11 May 2022) [47]. The analysis provided insightful predictions of the OsHDZIP genes involved in rice plant defense against the brown planthopper (BPH), the rice striped stem borer (SSB) Chilo suppressalis, and the combined SSB_BPH stress (Figure 10B). In response to BPH, SSB, and the combined SSB_BPH stress, only two genes, OsHDZIP4 and OsHDZIP10, had dominant expression. This expression analysis revealed the role of the OsHDZIP gene family in plant defense against pest infestations.

Expression Analysis of OsHDZIP Genes under Cnaphalocrocis medinalis

The transcriptomic expression data were taken from the NCBI (GSE159259) (accessed on 15 May 2022) [48]. The analysis provided insightful predictions of the role of the OsHDZIP genes in rice plant defense against the rice leaf folder (RLF) Cnaphalocrocis medinalis Guenée (Lepidoptera: Crambidae) (Figure 10C). The response of the HDZIP gene family was moderate; among the 40 OsHDZIP genes, only OsHDZIP4 and OsHDZIP10 were highly expressed at all time points (i.e., 0, 6, 12, and 24 h), and the remaining genes showed no transcription. This expression analysis suggests that the HDZIP gene family plays a role in the rice defense system against biotic stress.

Expression Analysis of OsHDZIP Genes under Brassinosteroids and Jasmonic Acid

The expression data of the OsHDZIP gene family under jasmonic acid (JA) and brassinosteroids (BRs) were obtained from the RiceXPro expression database (version 3.0) (accessed on 25 May 2022) [45]. The microarray expression data of the rice HDZIP gene family are presented as a heatmap, with blue to red colors reflecting the expression pattern (Figures 11 and 12).
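The series analysed in this and the preceding subsections are public GEO accessions; a hedged sketch of retrieving one of them programmatically with the GEOparse package is given below. Whether a given series exposes a processed expression table depends on the original submission, so this is an illustration rather than a guaranteed recipe.

```python
# Download a GEO series cited above and list its samples to locate the treatments of interest.
import GEOparse

gse = GEOparse.get_GEO(geo="GSE167872", destdir="./geo_cache")

for gsm_name, gsm in gse.gsms.items():
    title = gsm.metadata.get("title", ["(no title)"])[0]
    print(gsm_name, title)
```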
Differential Expression of OsHDZIP Genes in Response to Nilaparvata lugens and Laodelphax striatellus Infestations and JGM Spraying

To further investigate the response of the OsHDZIP genes under BPH and SBPH infestations and botanical fungicide JGM application, we performed quantitative real-time polymerase chain reaction (qRT-PCR) to analyze the expression patterns at three time points, 2 days (2D), 4D, and 8D, during BPH and SBPH infestations and after JGM spraying (Figures 13-15). The results revealed that the candidate genes were expressed under all stress conditions.

The BPH is a severe rice pest that causes huge losses in annual rice production. The expression analysis revealed the important role of the O. sativa HDZIP transcription factors through moderate and high expression. Among the eight candidate genes (two genes from each subfamily) at three different time points (Figure 13), moderate to high expression patterns were found, particularly for OsHDZIP20, which had the highest expression of six-fold at 2 days of infestation, followed by four-fold at 4 days and two-fold at 8 days of BPH infestation. However, OsHDZIP03, OsHDZIP28, and OsHDZIP40 showed low transcription, whereas OsHDZIP04, OsHDZIP10, OsHDZIP15, and OsHDZIP37 showed moderate expression.

The SBPH is the second most important pest of rice after BPH and causes drastic losses to rice plants and their production. To validate the OsHDZIP transcription factor response through qRT-PCR under SBPH infestation, we used eight candidate genes at three time points (Figure 14). Among them, OsHDZIP03 displayed moderate expression at 2 and 4 days of treatment and the highest expression, up to 10-fold, at 8 days of infestation. This was followed by OsHDZIP40, with a six-fold expression at eight days. Furthermore, OsHDZIP04, OsHDZIP10, OsHDZIP15, OsHDZIP28, and OsHDZIP37 showed moderate expression, whereas OsHDZIP20 showed the lowest expression pattern.

JGM is a synthetic antibiotic applied to treat rice sheath blight disease and is also reported to enhance BPH fecundity; herein, we performed qRT-PCR analysis to validate the response of the OsHDZIP gene family under JGM treatment (Figure 15). The results revealed that OsHDZIP genes such as OsHDZIP10, OsHDZIP20, and OsHDZIP40 presented upregulated expression, whereas OsHDZIP04, OsHDZIP15, and OsHDZIP37 showed moderate expression, and a single gene, OsHDZIP03, had a negligible expression pattern. These results suggest that the OsHDZIP genes participate in rice immunity regulation in response to JGM spraying.
Figure 15. Differential expression analysis of OsHDZIP genes (i.e., OsHDZIP3, OsHDZIP4, OsHDZIP10, OsHDZIP15, OsHDZIP20, OsHDZIP28, OsHDZIP37, and OsHDZIP40) under JGM spraying treatment in Oryza sativa.

Discussion

Plants counter various biotic and abiotic stresses during their life cycle, which impair their biochemical and physiological processes. Plants have developed mechanisms to tackle these adverse conditions, such as the activation of stress-related TFs and intensified metabolic activities. HDZIP TFs are distributed widely across the plant kingdom and have been recognized for their roles in growth and developmental activities and in the response to environmental stimuli [7,8,49].

OsHDZIP Genes Are Widely Distributed in the Rice Genome

Phylogenetic trees represent an established method for determining evolutionary changes and functional relationships [50]. The HDZIP protein family has been identified in many species, from mosses to higher plants, such as Ceratopteris richardii [51] and Physcomitrella patens [52], as well as angiosperms and gymnosperms [53]. In our study, we conducted a genome-wide survey to determine the phylogenetic relationships and investigated the potential roles of these genes via qRT-PCR expression analysis; in total, 127 HDZIP proteins from rice (40 OsHDZIPs), Cucumis sativus (40 CsHDZIPs) [8], and Arabidopsis thaliana (47 AtHDZIPs) [40] were analyzed. These proteins were further divided into four subfamilies (Figure 2). The obtained results align with previously reported studies on Arabidopsis thaliana and Cucumis sativus [8,40].
The HDZIP III subfamily was more highly conserved than the other subfamilies, whereas HDZIP I, HDZIP II, and HDZIP IV varied in number among different species [54-56]. The number of conserved motifs in the OsHDZIP genes was analyzed, and the results showed that the HDZIP III and HDZIP IV subfamilies exhibited the highest numbers of motifs among the subfamilies (Figure 8). The position of each motif within each subfamily was highly conserved, supporting the evolutionary classification into different subfamilies [40,55,57].

OsHDZIP Genes Have Tissue Specificity and Play an Integral Role in the Development of Oryza sativa

The HDZIP I TFs have been documented for their involvement in plant developmental processes such as root growth, stem elongation, leaf morphology, flowering induction, and pollen hydration [8]. The OsHDZIP gene family members OsHDZIP18 and OsHDZIP27 displayed dominant expression in all tested organs of O. sativa during the developmental stages. Similar results were reported for the OsHDZIP18 and OsHDZIP27 homologs CsHDZ02 and CsHDZ33, respectively. These results indicate the potential importance of OsHDZIP18, OsHDZIP27, and other subfamily members in regulating the growth and developmental activities of O. sativa [8]. Members of the HDZIP III subfamily have been shown to be important for proper morphogenesis, embryo development, support during the commencement of lateral organ development, control of the shoot apical meristem, and water and nutrient transport throughout the plant body [6,8,59,60]. Similarly, all of the genes in the HDZIP III subfamily showed transcription in practically all of the investigated tissues (Figure 9), suggesting that they may be important in sustaining the normal growth activities of rice.
Members of the HDZIP IV subfamily have been implicated in root hair development, trichome production, anthocyanin biosynthesis, and flowering regulation in plants [49,61]. OsHDZIP11 and OsHDZIP37 were expressed in all tissues (Figure 9), suggesting that subfamily HDZIP IV plays a potential role in regulating the developmental activities of O. sativa.

OsHDZIP Genes Regulate Plant Responses to Chewing Insects and Other Abiotic Stresses

The homeodomain-leucine zipper transcription factors play a potential role in plant growth and development and are highly responsive to the pronounced effects of stresses including drought [62], cold [63], salinity [64], heat [65], heavy metals [66], flooding [67], and nutrient stress (iron deficiency) [68]. The BPH and SBPH are serious rice pests inflicting damage on a massive scale across Asia [19,21]. Their nymphs and adults cause direct damage by feeding on phloem sap from the tillering to the milking stages of rice and, in the process, transmit viral pathogens such as rice ragged stunt virus (RRSV) and rice grassy stunt virus (RGSV) [69,70]. Several studies have identified the potential roles of the HDZIP gene family; however, until now, no comprehensive study had been conducted to uncover the response of OsHDZIP genes to BPH and SBPH infestations. Herein, we performed qRT-PCR on eight candidate genes (two from each subfamily) to validate their expression. The results revealed the potential role of the OsHDZIP genes: among these eight candidate genes, OsHDZIP20 showed dominant expression, OsHDZIP04, OsHDZIP10, OsHDZIP15, and OsHDZIP37 showed moderate expression, and OsHDZIP03, OsHDZIP28, and OsHDZIP40 showed low transcription (Figures 13 and 14). Collectively, the responses of these genes uncover the crucial role of OsHDZIPs in rice plant immunity in response to insect pest infestations. However, functional studies are required to investigate the underlying mechanisms inducing the HDZIP gene family response against insect pests.

The antibiotic JGM was developed in recent decades in China and is usually applied two to three times in rice fields to treat rice sheath blight disease (Rhizoctonia solani) and fungal infections [25]. Moreover, JGM is reported to be an effective controlling agent for sheath blight disease; however, it also has drawbacks because of its potential role in enhancing BPH fecundity [25]. For instance, when JGM was applied to rice plants at a rate of 200 parts per million (ppm), it increased the rice's resistance to sheath blight disease by disrupting the fungal cell wall and reducing sporulation, but it was also reported to enhance BPH fecundity [25]. In the current study, the OsHDZIP genes showed differential expression patterns, in which the candidate genes OsHDZIP40, OsHDZIP28, and OsHDZIP10 were dominantly expressed, whereas OsHDZIP3, OsHDZIP4, OsHDZIP15, and OsHDZIP37 were moderately expressed (Figure 15). We speculate that these OsHDZIP genes are essential for inducing plant immunity to fungal pathogens in rice. Further studies are required to elucidate the underlying mechanisms by which OsHDZIP genes boost rice immune responses under various insect pest infestations.
The osa-miR166 family targeted four OsHDZIP genes; miR166 has been reported to be involved in drought stress in maize [72] and cowpea [73], soybean (Glycine max) seed development [74], peanut disease resistance [75], and plant growth, development, and stress response in apple [77]. miR444 has been reported to be involved in cadmium stress regulation in rice [78]. Previously, Anca et al. (2012) reported that the expression profiles of three miRNAs (i.e., osa-miR414, osa-miR408, and osa-miR164) targeting the OsABP, OsDBH and OsDSHCT genes in rice enhanced the miRNA response against salinity stress [79]. Our results show that four OsHDZIP genes (i.e., OsHDZIP9, OsHDZIP13, OsHDZIP37, and OsHDZIP40) were targeted by these miRNAs (Figure 4), with their target sites shown in Figure 5, providing another aspect of the potential role of these miRNAs in regulating the rice response under salinity. Collectively, these findings suggest that miRNAs may play vital roles in numerous growth, developmental, and stress-regulation processes by altering the transcriptional levels of HDZIP genes in O. sativa. Among the 28 miRNA families targeting the HDZIP gene family, the predicted expression levels and functions of several miRNAs have been identified, whereas the roles of various miRNAs targeting OsHDZIP genes that regulate biotic/abiotic stress responses and other important agronomic traits remain to be clarified.

Expression Analysis of the HDZIP Gene Family under Hormonal Applications

JA is synthesized from linolenic acid through the action of several enzymes in plant chloroplast membranes, and current evidence indicates that it induces resistance against necrotrophic pathogens and chewing herbivores [80,81]. In comparison, the microarray data showed that the expression of OsHDZIP3, OsHDZIP9, and OsHDZIP28 was enhanced, suggesting their possible involvement in JA-mediated rice resistance against BPH (Figure 11). BRs are a group of polyhydroxylated steroidal phytohormones that are essential for plant development, growth, and productivity [82]. In addition to their significant involvement in growth-related activities, BRs are a key stress hormone [27]. Herein, the upregulated expression of OsHDZIP15 and OsHDZIP20 suggests the essential role of these genes and their possible participation in the immunity regulation of the rice plant; however, functional studies are required to unfold the underlying mechanisms of these crucial hormones (Figure 12).

Conclusions

Forty OsHDZIP family TFs were identified in the rice genome database and classified into four subfamilies based on their domain and structural properties. Furthermore, heatmap analysis revealed the expression of OsHDZIP TFs in distinct O. sativa tissues. The expression of OsHDZIP genes under salinity and hormone stress indicated that they might play a role in modulating O. sativa resistance. Furthermore, differential expression patterns were found under JA and BR treatments, indicating that OsHDZIP genes may be critical in hormone-mediated rice immunity against BPH and SBPH. On the other hand, stress control is a complicated system, and our results suggest the potential application of HDZIP biomarkers in developing stress-resilient rice lines. These results provide the missing role of HDZIP genes in regulating rice immunity against SBPH and, particularly, BPH.
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/bioengineering9080398/s1. Table S1: Primer sequences for RT-qPCR; Table S2: Types and numbers of cis-acting regulatory elements involved in growth, development, and stress and hormonal responses; Table S3: Gene structure and domain distribution; Table S4: Predicted functional partners of OsHDZIP proteins; Table S5: A detailed list of miRNAs.

Author Contributions: S.A. and L.G. designed the research; S.A. and Y.C. conducted the experimental work; A.Z.S., H.Z., H.W. and C.X. contributed to the preparation of biological materials; S.A. performed bioinformatics analysis and wrote the main manuscript. All authors have read and agreed to the published version of the manuscript.
Macro and micro-elements concentrations in Calligonum comosum wild grazing plant through its growth period

In this study, the change in the content of the macro and micro elements in the growing wild grazing plant Calligonum comosum was tracked at the Research and Training Station of King Faisal University in Al-Hassa Governorate, Kingdom of Saudi Arabia. Mineral elements were estimated in aerial parts (plant as a whole, leaves and stem) from January-April 2020. The results showed that the concentrations of nitrogen, phosphorus and potassium followed the order whole plant > leaves > roots, while the concentrations of calcium, magnesium, manganese, zinc and copper in the leaves were higher than in the other parts, and the concentrations of these elements in the whole plant were higher than in the roots. The results showed that the plant contents of nitrogen, potassium and zinc were highest in March, while the concentrations of phosphorus, calcium, iron and copper were highest in February. The concentrations of magnesium, manganese and copper were highest in January and April, respectively. The values of nitrogen, phosphorus, potassium, calcium, magnesium, iron, manganese, zinc and copper ranged from 11.1 to 18.4 g kg−1, 4.17-2.33 g kg−1, 13.73-18.97 g kg−1, 24.50-28.90 g kg−1, 10.40-12.30 g kg−1, 1500-1677 mg kg−1, 45.45-49.29 mg kg−1, 70.70-177.23 mg kg−1, and 16.78-73.46 mg kg−1, respectively. Furthermore, the results showed that the lowest values of the elements appeared in the plant roots in April. In addition, the distribution of the elements followed the normal life curve from January to April, and the evaluated elements satisfy the needs of the grazing animals in the areas where this type of plant grows.

Introduction

The balance between nutrients and minerals is important for the growth and health of animals, especially wild animals such as goats, sheep and camels. Minerals, which are very important for all living organisms, play an important role in improving rumen activity and increasing the efficiency of feed utilization in ruminants (Weiss, 2008). Providing a complete diet, including proteins, fats, vitamins, minerals and water, is important for wild animals that live in the desert, such as camels, goats and sheep. Reducing dependence on imported feed is a goal pursued by the Kingdom of Saudi Arabia (Alzarah, 2020). Trace minerals are essential in the metabolic activities of livestock and poultry; they aid in the growth and development of cattle and poultry, as well as in immunological function and reproductive performance. Minerals are also necessary to support numerous enzyme systems, in addition to boosting reproductive and production performance characteristics (https://www.kemin.com/in/en/markets/animal/nutritionalefficiency/mineral-nutrition). Furthermore, minerals combine with proteins, lipids, and other substances to build the soft and hard tissues of the body, and they have a specific influence on osmotic pressure, acid-base balance, and nerve and muscle stimulation through their connection with enzyme and hormone systems (Eren, 2009).
According to Altıntas (2013), the absence or excess of minerals in animal feed, as well as insufficient or excessive mineral consumption by animals, can have negative impacts on the reproductive, developmental, and immunological systems of animals, in addition to their production. Because mineral substances play an important role in the metabolic system of the animal, the animal obtains them from grazed plants (Kutlu et al., 2005; Gokkus et al., 2013). Plant mineral contents depend largely on ecosystem characteristics such as pasture species, soil type, climate, phenological period and abiotic factors (Underwood and Suttle, 1999). Mineral elements such as Ca, P, K, Fe and Cu are found in high quantities in pastoral plants that grow in the spring, whereas plants that grow in the autumn contain zinc and manganese in high amounts. A plant's ability to accumulate mineral substances is determined by its development period, nutrient content and root structure, the structure and mineral content of the soil in the area where the plant grows, and the amount and distribution of precipitation during the vegetation period (Mandal, 1997; Chetri et al., 1999; Abdullah et al., 2013; Temel and Surmen, 2018; Temel, 2019). As a result, it is critical to understand the mineral properties of the plants used as feed sources, as well as their quality features, in order to determine the amount and content of mineral substances that animals take from the feeds (Keskin et al., 2016). In addition to these characteristics, plant shoots are favoured as an alternative feed source in animal nutrition due to their high nutritional content (Oktay and Temel, 2015a), and their shoots are heavily chewed by animals. Due to its ability to maintain shoot development and greenness during vegetation, this species is an important feed source for grazing small ruminants, particularly during the summer and autumn seasons (Oktay and Temel, 2015b). To the best of our knowledge, there have been no investigations on the macro and micro mineral composition of phog during its active development stage; there has only been one study on determining the mineral content of the plant in the spring (February) season (Abdullah et al., 2013). Grazing animals must consume enough forage to meet their mineral requirements. Factors that reduce forage intake (for example, low protein and excessive lignification) also affect overall mineral consumption. Mineral concentrations in plants are affected by factors such as soil type, plant species, maturity stage, dry matter production, grazing management, and environmental climate (Khan et al., 2005; McDowell et al., 1983). In addition, the mineral levels of feedstuffs and their biological availability are very important (Dost, 1997, 2001; Dost et al., 1990). Minerals are required for life to meet the demands of development and production, as well as to replace quantities lost during regular metabolism. Minerals participate in a variety of biological activities as enzyme components and have structural and osmotic roles in a variety of animal tissues (Masters and White, 1996). These authors added that about 19 mineral elements are essential for animals, and others may be essential, but the evidence is inconclusive.
The essential elements, as reported by both authors, were calcium (Ca), phosphorus (P), magnesium (Mg), potassium (K), sodium (Na), sulphur (S), cobalt (Co), copper (Cu), iron (Fe), iodine (I), selenium (Se), zinc (Zn), molybdenum (Mo), vanadium (Va), boron (B), lithium (Li), lead (Pb), cadmium (Cd) and tin (Sn). Dierenfeld et al. (1995) found that the concentration of mineral elements on a dry matter basis differed significantly among the 26 plant species they studied. They found that calcium content was 0.55-4.29%, potassium 0.28-1.71%, magnesium 0.12-0.65%, sodium 0.001-0.074%, phosphorus 0.06-0.19%, and zinc 2.5-6.4 µg/g, which were lower than or within the dietary levels required for feeding dairy animals, whereas copper (3.0-12.2 µg/g), iron (2.5-29 µg/g) and manganese (10.9-269 µg/g) were sufficient for feeding most animals. Miller (1996) showed that the zinc, copper, selenium, manganese and potassium contents of plant species within >100 sites in saline lands were much lower than the dietary needs of the cows raised in these areas, while the same plants' contents of sulfur and magnesium exceeded the dietary needs of these cows. Al-Zaid et al. (2004) claimed that the sodium, potassium and magnesium contents of Panicum turgidum Forssk. were within the requirements of sheep, goats and camels, while its contents of phosphorus and micro elements (zinc, copper, manganese) were low. For sustainability and to reduce dependence on importing fodder from abroad or cultivating varieties that consume water from limited resources, it is necessary to search for and find natural resources, such as wild plants in the desert environment of the Kingdom of Saudi Arabia, that can live in harsh conditions of high temperature and scarce, low-quality water (Alzarah, 2020). There is a wide gap between the nutritional needs of animal feed and the quantities produced in Saudi Arabia; therefore, large quantities of feed are imported to reduce the gap between production and consumption. This study therefore aims to follow the monthly changes of minerals in different parts of the Calligonum polygonoides L. ssp. comosum (L'Hér.) plant to determine which aerial plant parts have optimum levels of the minerals that meet the requirements of grazing ruminants in the wild desert.

Studied area

The vegetative aerial parts of the Calligonum comosum wild plant were collected from plants growing at different sites at the Research and Training Station of King Faisal University, which has been protected since 1975. The study area is located in the Eastern Province of Saudi Arabia, between 25°16′14″ and 25°15′25″ N and 49°43′20″ and 49°41′42″ E, with an area of 6 km² (600 ha). The elevation of the area is 150 m above sea level. The studied area is located in a tropical zone with a hot desert climate. The average annual rainfall in the region is 72 mm and the annual potential evapotranspiration is 3600 mm. The average annual temperature is 21.46 °C, with a range between 2.20 and 42.37 °C. The soil order is Aridisols.

Preparation of plant samples for measuring the elements

The plants were collected monthly from January to April 2020 and divided into the vegetative parts (leaves, stems, and the whole plant as the aerial part) and grouped according to each month and part. The plant samples were cleaned of dust with a brush, washed with 0.1 M HCl, and then washed three times with deionized water. After that, the samples were air dried for 48 h.
The samples were then dried for two days at 65 °C in a drying oven, ground, and passed through a No. 60 mesh sieve, after which the samples were stored in plastic bags until the contents of the elements were estimated.

Estimation of the element contents of the plant

A 0.5 g portion of dried plant sample was digested in a 50 ml volumetric flask with 2.5 ml of concentrated sulfuric acid (H2SO4, 95-97%) on a hotplate at approximately 270 °C. Repeated additions of H2O2 were made until the digest became clear (Cottenie, 1980). After digestion, deionized water was added to a final volume of 50 ml in the volumetric flask. Elements were determined in the liquid sample using an atomic absorption and emission spectrometer, model Shimadzu AA-7000, while the total nitrogen in the samples was determined according to Cottenie (1980) using the micro-Kjeldahl method; phosphorus was also determined according to Cottenie (1980). The obtained data were analyzed using the SAS computer program. Means were differentiated using the LSD test as described by Snedecor and Cochran (1989), and the data were analysed as per the Khan et al. (2019) studies.

Analysis of variance of the mineral contents

The mineral (macro-element and micro-element) contents were estimated monthly in the aerial parts of the plant (whole plant, leaves, and stems). The analysis of variance is shown in Tables 1-5. The results illustrated that significant monthly changes were found with respect to ash, N, P, K, Ca, Mg, Mn, Zn, and Cu, whereas significant variations among the aerial parts were found in their contents of ash, N, P, K, Ca, Mg and Cu. However, the interaction between months and different parts was significant only for ash%, C, P, K, and Cu.

Nitrogen (N, g kg−1)

Nitrogen is the source of protein for many organisms, including plants, and plays an important role in determining the quality of feed as food for animals. The analytical results for nitrogen (g kg−1) are presented in Table 1. Across the studied months, the N content did not vary significantly, ranging from 14.21 g kg−1 in April to 16.74 g kg−1 in March. The comparison of means showed the highest N value in the whole plant (17.30 g kg−1) and the lowest content in the stem (12.0 g kg−1), while the content in the leaves was 16.4 g kg−1, with no significant variation between the whole plant and the leaves. The results in Fig. 1 show that the highest nitrogen content was 18.4 g kg−1 in the whole plant during February, and the lowest value was in the stem in April.

Phosphorus (P, g kg−1)

The relevance of phosphorus (P) to plants, grazing animals, and humans is undeniable. Phosphorus is essential for plant growth and can be found in all living plant cells. It is involved in various critical plant processes, such as energy transfer, photosynthesis, sugar and starch processing, nutrient movement within the plant, and the transmission of genetic traits from one generation to the next. Table 1 shows the changes in phosphorus content in the different aerial parts through the studied months. Across the studied months, the P content varied significantly, ranging from 3.43 g kg−1 in February to 2.97 g kg−1 in April, with no significant differences among the first three months of the study. Looking at the phosphorus changes in the plant parts, the results recorded that the phosphorus content followed the order whole plant (4.01) > leaves (3.13) > stem (2.70 g kg−1), with significant variations among the three aerial parts of the plant.
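To make the mean-separation procedure described in the statistical analysis above concrete, the following Python sketch runs a one-way ANOVA across the three aerial parts and computes a Fisher's LSD threshold; the replicate values are hypothetical and only illustrate the calculation, they do not reproduce the data in Tables 1-5.

```python
# One-way ANOVA followed by an LSD threshold; concentration values are hypothetical.
import math
from scipy import stats

groups = {
    "whole plant": [17.1, 17.5, 17.3],   # N (g/kg), three illustrative replicates
    "leaves":      [16.2, 16.6, 16.4],
    "stem":        [11.8, 12.1, 12.0],
}

f_stat, p_value = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Fisher's LSD at alpha = 0.05 for equal group sizes: t_crit * sqrt(2 * MSE / n)
n = 3
k = len(groups)
total_n = n * k
mse = sum((x - sum(g) / n) ** 2 for g in groups.values() for x in g) / (total_n - k)
t_crit = stats.t.ppf(0.975, df=total_n - k)
lsd = t_crit * math.sqrt(2 * mse / n)
print(f"LSD(0.05) = {lsd:.3f}; part means differing by more than this are declared different")
```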
Significant differences were found in the phosphorus content of the plant under the effect of months and aerial parts (Fig. 1). The highest content of phosphorus (4.17 g kg−1) was found in the leaves in January, while the lowest content (2.70 g kg−1) was in the stem in April.

Potassium (K, g kg−1)

Generally, the potassium content of this plant ranged between 13.73 g kg−1 in the stem in April and 16.60 g kg−1 in the whole aerial plant in February, with significant differences between the values under the effect of months and aerial parts.

Calcium (Ca, g kg−1)

The amount of Ca in the Calligonum comosum wild plant over the four months from January to April is given in Table 5. It was determined that there are significant changes in the Ca contents of the plant according to months and plant parts, while the change through the months in the different parts of the plant (the interaction) is not significant (Fig. 1). The calcium content of the plant ranged from 26.29 g kg−1 in April to 28.0 g kg−1 in February; across the months, the Ca values increased from January to February and then decreased.

Magnesium (Mg, g kg−1)

The changes in Mg content over the 4 months from January to April in the different parts of the Calligonum comosum wild plant are shown in Table 1. According to the results, the highest value of Mg was found in January (11.62 g kg−1) and the lowest content in April (10.62 g kg−1), with significant variation between them, while no significant differences were recorded among the means of January, February and March.

Table 1. Changes in ash%, N, P, K, Ca and Mg (g kg−1) in the plant parts (whole plant, leaves, and stem) of Calligonum comosum through the months (January, February, March and April 2020). Means in the same column followed by different letters are significantly different at p < 0.05; *, **, ***, **** indicate significance at the 0.05, 0.01, 0.001 and 0.0001 levels, and NS means non-significant at p < 0.05; LSD0.05 is the least significant difference at the 0.05 level; CV is the coefficient of variation. The same letters within the months columns, and within the plant-parts rows, indicate non-significant differences at p < 0.05.

The results in Table 1 revealed significant differences between the aerial parts of the plant under study (whole plant, leaves and stem) with regard to Mg content. The highest Mg concentration (11.84 g kg−1) was found in the leaves, while the lowest value (11.16 g kg−1) was noted in the stem. Concerning the changes in Mg content across months and aerial parts (Fig. 1), the highest values (12.30 g kg−1) were obtained in the leaves during January and March, while the stem produced the lowest value (10.40 g kg−1) in April.

Iron (Fe, mg kg−1)

The Fe amounts in the different parts of Calligonum comosum over the four months from January to April are presented in Table 2. Statistical analysis revealed that the iron content showed no significant differences with respect to months or aerial plant parts; the average iron content through the months ranged between 1552 mg kg−1 in January and 1617 mg kg−1 in April. The Fe contents of the whole plant, leaves and stem were 1625, 1586 and 1587 mg kg−1, respectively.
The highest content was found in the whole plant (1677 mg kg−1) in February, while the lowest value was in the stem (1500 mg kg−1) in April.

Manganese (Mn, mg kg−1)

The manganese content of Calligonum comosum did not change significantly according to plant parts or the interaction of parts with months, while the average contents through the months varied significantly (Table 3). The average manganese contents during the four months January, February, March and April were determined as 37.18, 37.1, 41.52 and 47.78 mg kg−1, respectively (Table 3), without any significant variation among the contents of January, February and March. The Mn concentration in the leaves (41.81 mg kg−1) was higher than in both the stem (41.16 mg kg−1) and the whole plant (40.95 mg kg−1). Also, Table 3 revealed that the lowest level of Mn was 35.19 mg kg−1 in the whole plant in January, while the highest content was 49.29 mg kg−1 in the leaves in April, without any significant interaction between plant parts and months.

Zinc (Zn, mg kg−1)

According to the results presented in Table 4, the highest Zn content (14.90 mg kg−1) was obtained from Calligonum comosum in March and the lowest value (9.71 mg kg−1) was recorded in April, with a significant difference between the two months in terms of Zn content (Table 4). In addition, the analyses revealed no significant variations in Zn contents among the different plant parts of Calligonum comosum. The Zn content followed the order stem (13.03 mg kg−1) > leaves (13.01 mg kg−1) > whole plant (10.86 mg kg−1).

Copper (Cu, mg kg−1)

The Cu content of Calligonum comosum changed significantly according to plant parts and through the four months (Table 5). The average Cu contents during the four months January, February, March and April were determined as 61.82, 64.38, 46.71 and 21.09 mg kg−1, respectively (Table 3), without any significant variation between the contents of January and February.

Discussion

These results are in the context of the findings of Bidgoli (2018) in their study on the forage quality of Calligonum comosum at three phenological growth stages (vegetative, flowering and seedling), while Khan et al. (2005) showed that the mineral contents of leaves in different forages and grasses, as related to the requirements of ruminants, differ from species to species and through the growth period.
A significant interaction between month and aerial plant part was found for ash% (Table 1 and Fig. 1). The highest ash% (16.63%) was found in the aerial whole plant in January, and the lowest (4.40%) in the stem in April.

The N content increased as the months progressed from January to March and then began to decline in April. This result is consistent with Batooli (2011), who showed that feed quality varies with the growth period of the plant from March to April.

The phosphorus results agree with those of Dongmei et al. (2005). The phosphorus content of Calligonum comosum is above the ranges reported for different naturally growing plant species, which varied between 1.3 and 4.4 g kg⁻¹ (NRC, 2001). Alzarah (2020), in his study on the mineral elements of halophytic plant species in the Eastern Province of the Kingdom of Saudi Arabia, indicated that the Calligonum plant contains 3.4 ± 0.5 g kg⁻¹, which means the leaves had a higher content in January. This content is higher than that of alfalfa (2.8 g kg⁻¹; NRC, 2016a) in most plant parts of Calligonum comosum, except the stem in March.

A deficiency of K leads to weakness in the bones of animals and a decline in their growth and development, as indicated by NRC (2001), McDonald et al. (2011) and Kutlu et al. (2005). Fig. 1 shows the changes of potassium content across months and across the aerial parts of the wild Calligonum comosum plant. The K content was higher than the minimum requirement for cattle (6 g kg⁻¹), sheep (5-8 g kg⁻¹) and milking cows (10-10.5 g kg⁻¹), but lower than the highest level that can be used for dry cattle (30 g kg⁻¹) (NRC, 2001, 2016b). The results of Table 1 revealed significant differences among the mean K contents across months (January to April), with the lowest content (14.86 g kg⁻¹) in April and the highest (17.01 g kg⁻¹) in March. This result agrees with Temel (2019) and with Al-Jaloud and Hussain (2006), who, in their study on the Sabkha ecosystem and halophyte plant communities in Saudi Arabia, indicated that the K content was 4-28.8 g kg⁻¹ in the Eastern Province, while in the Western Province it ranged between 0.5 and 2.6 g kg⁻¹. In addition, Alzarah (2020) indicated that the K content of wild Calligonum comosum from the Eastern Province was 16.0 ± 3.9 g kg⁻¹. The K levels in the different parts of this plant, as shown in Table 4, followed the order leaves (17.55 g kg⁻¹) > whole plant (16.01 g kg⁻¹) > stem (14.87 g kg⁻¹). In summary, the higher K values were in leaves sampled in March. As a whole, the K level in Calligonum comosum was lower than in alfalfa (35 g kg⁻¹). Compared with animal requirements, the K content was therefore above the minimum needs of cattle (NRC, 2016a), sheep (NRC, 2016b) and milking cows, but below the maximum level that can be used for dry cattle (NRC, 2016a).
These calcium results confirm the findings of Mandal (1997), Chetri et al. (1999), Abdullah et al. (2013), Temel and Surmen (2018) and Temel (2019), who found that element contents vary from month to month, with the vegetation period and with the seasons of the year. Our results are also consistent with Temel (2019), who indicated that the mean calcium content of Calligonum comosum over the seven months from April to October of his study varied from 13.2 to 18.2 g kg⁻¹. With respect to calcium changes in the aerial parts, the order was leaves (28.05 g kg⁻¹) > whole plant (27.82 g kg⁻¹) > stem (25.06 g kg⁻¹); the highest value was 28.09 g kg⁻¹ in the leaves in February and the lowest was in the stem in April (Fig. 1). These results agree with Alzarah (2020), who found a Ca content of 26.8 ± 19.2 g kg⁻¹ in Calligonum comosum and reported that such levels are higher than those required by beef cattle (0.18-0.44%), milk cows (0.60-0.65%), sheep (0.25-0.84%) and goats (0.138%) (NRC, 2016b and 2001). According to Wardeh (1997), calcium deficiency is not a problem for camels grazing natural pasture, while Weiss (2008) indicated that the maximum tolerable level (MTL) of Ca is 1.5% of dietary DM (approximately twice the NRC requirement for dairy cows). Al Noaim et al. (1991), in their study of the chemical composition of range plants in the Eastern Province of Saudi Arabia, showed that the Ca concentration varied from 8.4 to 23.6 g kg⁻¹. The results of Table 1 and Fig. 1 indicate that the Ca content of Calligonum comosum approaches the needs of wild animals according to NRC (2016b), with the higher values found in the leaves throughout the four months of the study.

Magnesium is present in the animal body at a concentration of 0.4 g kg⁻¹ (Kutlu et al., 2005) and is closely related to calcium and phosphorus. The skeletal structure contains 70% of the magnesium in the animal body, with the remaining amount found in soft tissues and fluids. Magnesium and phosphate are essential for transferase enzyme activity, glucose and lipid metabolism, and cell respiration. A deficiency of magnesium in the animal diet can cause irritability, myotonia and nerve over-excitation, deficits in glucose and lipid metabolism, weakness in skeletal and hoof development, and a decline in animal growth and development (NRC, 2001; McDonald et al., 2011; Kutlu et al., 2005). According to NRC (2016b), the level of Mg in Calligonum comosum was higher than in alfalfa (3-10 g kg⁻¹) and also higher than that needed for milking cows (1.8-2.1%) (NRC, 2001, 2016a), sheep (1.2-1.8%) (NRC, 2016a), goats (4-8%) (NRC, 1985, 2016a) and horses (8 g kg⁻¹) (NRC, 2001). Alzarah (2020) determined the Mg content of Calligonum comosum as 11.8 ± 1.5 g kg⁻¹, a value within the range of our study (Table 1). From these results, we conclude that eating any part of the plant during the study months meets the nutritional needs of wild animals.

The Fe results are in line with Alzarah (2020), who indicated that the Fe content of Calligonum comosum from the Eastern Province of Saudi Arabia was 211.66 ± 45.39 mg kg⁻¹, while Temel (2019) found that the shoots of Calligonum comosum contained 99.73 to 190.43 mg kg⁻¹ over a 7-month period.
The maximum iron level (190.43 ppm) was found in April, while the lowest (99.73 ppm) was found in October, with iron content decreasing as the development period advanced (Table 3). The Fe content of Calligonum comosum was higher than that of alfalfa (189 mg kg⁻¹) (NRC, 1985). It was also higher than the levels required for sheep feeding (30-50 ppm) (NRC, 1985), dry cows (50 ppm) (NRC, 2016b), milking cows (1.8-12.8 ppm) (NRC, 2001) and the proposed levels for feeding large and small camels (50-100 ppm, respectively) (Wardeh, 1997). The iron level is also lower than the level of 1000 ppm indicated for cows (NRC, 2001) and camels (Wardeh, 1997) and the 500 ppm indicated for sheep (NRC, 1985).

The Mn concentrations were lower than the value recorded by Alzarah (2020) for Calligonum comosum grown in the Eastern Province of Saudi Arabia, 65.28 ± 35.88 mg kg⁻¹; the difference between the Alzarah values and our results may be due to location, soil and timing of sampling. The recorded levels in this natural plant were generally higher than those of alfalfa (30.3 ppm) (NRC, 2016b). They were also higher than the minimum requirements for sheep (20-40 ppm), dry cows (20 ppm) (NRC, 2016b) and milking cows (14 ppm) (NRC, 2001). Wardeh (1997) suggested a mean level of 40 ppm as a minimum requirement for camels, but most animals can tolerate very high concentrations of manganese, up to 1000 mg kg⁻¹ for cattle, sheep and camels (Wardeh, 1997), and NRC (1985) reported the toxic level of this element as 1000 mg kg⁻¹. According to these results, the Mn content of Calligonum comosum is below these critical values and appears to be sufficient for ruminants. Mn deficiency leads to abnormal development and bone disorders in animals (Ayan et al., 2006).

Zinc plays an important role in the activation of enzymes, and Zn deficiency may lead to anaemia, negative effects on the immune system and infertility (Hidiroglou and Knipfel, 1984). Alzarah (2020) reported a Zn value of 7.29 ± 2.19 mg kg⁻¹ in Calligonum comosum. The recorded data (Table 4) revealed that the Zn concentration of the studied plant was less than that of alfalfa (18.6 mg kg⁻¹) (NRC, 2016b) and lower than the minimum requirements for sheep (20-30 mg kg⁻¹), milking cows (43-55 ppm) (NRC, 2016b) and camels (40 ppm) (Wardeh, 1997). However, some plant species could provide the minimum dietary requirement for cows (30 ppm) (NRC, 2016b). This content was also far from the maximum permissible limit that could cause toxicity to the animal, which is 500 and 700 ppm for cattle and sheep, respectively (NRC, 2016a). Therefore, this study showed that the Zn content of the species examined is in the defined range and at the level of meeting the Zn requirements of animals recommended by the NRC.

The Cu concentration in the stem (54.63 mg kg⁻¹) was higher than in both the leaves (47.69 mg kg⁻¹) and the whole plant (43.19 mg kg⁻¹). Table 5 also shows that the lowest Cu level was 16.78 mg kg⁻¹ in the leaves in April, while the highest was 73.46 mg kg⁻¹ in the stem in January, with significant differences between plant parts across the months. These Cu concentrations were lower than the value recorded by Alzarah (2020) for Calligonum comosum grown in the Eastern Province of Saudi Arabia, 54.97 ± 4.18 mg kg⁻¹; the difference between the Alzarah values and our results may be due to location, soil and timing of sampling.
These Cu levels in the Calligonum comosum plant were generally higher than those of alfalfa (5.3 mg kg⁻¹) (NRC, 2016b) and are markedly higher than the minimum requirements for sheep (5-11 ppm) (NRC, 1985), cattle (10 ppm) (NRC, 2016a) and milking cows (11 ppm) (NRC, 2016b). However, Wardeh (1997) stated that it is difficult to determine the minimum copper requirement of camels because the copper absorption rate depends mainly on its interaction with molybdenum, sulphur and possibly other elements. In addition, Underwood and Suttle (1999) noted that copper absorption is also influenced by climatic factors. These levels were much higher than the toxicity level for sheep (25 ppm) (NRC, 1985) but lower than the reported toxicity levels for dry cows and milking cows (100 and 80 ppm, respectively) (NRC, 2016a and 2001).

Conclusion
The macro- and micro-minerals varied significantly according to the aerial part of the Calligonum comosum plant and according to month. The nitrogen, phosphorus and potassium contents changed significantly among the aerial parts of the wild Calligonum comosum plant, in the order whole plant > leaves > roots, while the concentrations of calcium, magnesium, manganese, zinc and copper in the leaves were significantly higher than in the other parts (whole plant > roots). The plant contents of nitrogen, potassium and zinc were highest in March. The measured concentrations of phosphorus, calcium, iron and copper were relatively higher during February, when the temperature begins to rise, while the concentrations of magnesium, manganese and copper were highest in January and April, respectively. Overall, the plant can meet the daily N, P, K, Ca, Mg, Fe, Zn, Cu and Mn requirements of small ruminants throughout the 4-month growth period covered by this work.
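The month-by-part comparisons reported above rest on a two-factor analysis of variance followed by least-significant-difference (LSD) grouping at p < 0.05, as noted in the table footnotes. Purely as an illustration of that workflow — the mineral values below are hypothetical placeholders rather than the study's data, and Tukey's HSD stands in for the LSD grouping letters — a minimal sketch in Python might look like this:

```python
# Sketch: two-factor ANOVA (month x plant part) followed by pairwise grouping,
# using made-up potassium concentrations (g/kg) as placeholder data.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

data = pd.DataFrame({
    "month": ["Jan", "Jan", "Feb", "Feb", "Mar", "Mar", "Apr", "Apr"] * 3,
    "part":  ["leaves"] * 8 + ["stem"] * 8 + ["whole"] * 8,
    "k_g_per_kg": [17.2, 17.5, 17.8, 17.6, 18.0, 17.9, 15.1, 15.3,
                   14.9, 15.0, 15.2, 15.1, 15.4, 15.3, 13.6, 13.8,
                   16.0, 16.2, 16.5, 16.4, 16.8, 16.7, 14.7, 14.9],
})

# Two-way ANOVA with the month x part interaction, as reported in the tables.
model = ols("k_g_per_kg ~ C(month) * C(part)", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))

# Tukey HSD here stands in for the LSD-based grouping letters used in the paper.
print(pairwise_tukeyhsd(data["k_g_per_kg"], data["month"], alpha=0.05))
```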
IMAGE CLASSIFICATION OF AGRICULTURAL DATA USING SUPERVISED LEARNING TECHNIQUES: A SURVEY

Abstract: With the increasing progress in machine learning and image processing, image classification plays a vital role and offers clear benefits, such as classifying different varieties of wheat or rice seeds and other agricultural seeds and grains. The literature covers feature extraction and dimensionality reduction algorithms used to reduce error. Accuracy depends strongly on the ratio of samples allocated to the training and testing phases. This survey considers an artificial neural network with back-propagation and multiple hidden layers, as well as the support vector machine; both supervised learning models are used to classify seed types and to assess the achievable accuracy. Various issues and challenges are also highlighted. It is evident from the literature that back-propagation with multiple hidden layers can reduce error more than a single hidden layer.

INTRODUCTION
Digital image processing applies computer algorithms to digital images. Many noise and signal problems can be mitigated with suitable algorithms, and digital image processing is the main enabling technology for classification, feature extraction, pattern recognition, projection and multiscale signal analysis. Dimensionality reduction can be performed with feature extraction algorithms such as PCA-SIFT [9]. From an initial set of measured data, informative and non-redundant features are extracted through generalization steps. Analysing full images requires a great deal of memory and computing power, and training on raw data can cause the classifier to overfit and generalize poorly to new samples. Feature extraction constructs combinations of variables to overcome these issues while describing the important information with satisfactory precision. Various researchers have applied different supervised models to classification, using seed datasets split into training and testing samples. Accuracy depends both on the train/test split ratio and on the model chosen, which in turn depends on the dataset. Image resizing can introduce problems such as noise distortion, blurring and signal artefacts. Section 2 describes related work on image classification using supervised learning techniques. Supervised learning techniques such as ANN and SVM are described in Section 3. Section 4 gives brief information about the image processing steps used to overcome the issues that trouble researchers, and describes feature extraction algorithms. Section 5 contains the conclusion of this survey.

RELATED WORK
The literature survey was carried out with the open issues facing researchers in mind. Classification has been performed using various techniques, on datasets of rice seed varieties [11] and other crop seeds such as almonds [7], maize and wheat.
H. Saad and A. Hussain [5] proposed a model for classifying papayas as mature, over-mature or immature using both an artificial neural network and a threshold rule; accuracy was higher with the neural network than with the threshold rule, and the MATLAB neural network toolbox was used for the classification. Various researchers have used different supervised models for classification, with seed datasets split into training and testing samples; accuracy depends strongly on the train/test split ratio (a toy illustration of this dependence is sketched after the supervised-learning overview below), and the choice of model depends on the dataset. I. Kavdir [3] proposed an approach for discriminating sunflower (Helianthus), weed and soil using an artificial neural network with back-propagation (BPNN); topologies containing multiple hidden layers provided improved recognition compared with single-hidden-layer BPNN topologies [3]. Crop classification has also been performed using fuzzy logic [4] and neural networks. Classification of papaya maturity has been done with an ANN and a threshold rule, and the ANN proved more effective than the threshold rule even though it required more processing time. W. Tan, L. Sun, D. Zhang, D. Ye, and W. Che [2] proposed an approach to improve the results of support vector machines for characterizing wheat grains into several quality classes using near-infrared spectroscopic analysis together with an SVM; the results obtained with the SVM were better than those from NIR technology alone. D. Halac, E. Sokic, and E. Turajlic [7] proposed a model for almond classification that used a support vector machine instead of a neural network; the SVM gave higher accuracy in classifying split or broken almonds and other hazelnuts/almonds. S. Gupta and S. G. Mazumdar [6] studied image edge detection as one of the steps in digital image processing, using the Sobel operator to detect the object of interest, and showed that the Sobel edge detection algorithm is useful for detecting edges. M. R. Golzarian and R. A. Frick [8] proposed principal component analysis as a dimensionality reduction approach for classifying images of wheat, ryegrass and brome grass; their image processing included conversion of the true-colour image to grayscale, conversion of the grayscale image to a binary image, and use of the colour-segmented image in the subsequent feature extraction process.

SURVEY ON SUPERVISED LEARNING TECHNIQUES
Supervised learning is the machine learning task of inferring a function from labelled training data. The training data consist of a set of training examples, each comprising an input vector and a desired output (supervisory) signal. A supervised learning algorithm analyses the training data and produces an inferred function that can be used to map new examples. In the optimal case, the algorithm correctly determines the class labels for unseen instances; this requires the learning algorithm to generalize from the training data to unseen situations in a "reasonable" way.
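As a toy illustration of how the train/test split ratio influences the accuracy figures discussed above — using synthetic "seed feature" data rather than any of the cited datasets — a minimal sketch in Python/scikit-learn might look like this:

```python
# Sketch: effect of the train/test split ratio on the reported test accuracy.
# The synthetic feature data below is illustrative only.
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=12, n_informative=8,
                           n_classes=3, random_state=0)

for test_fraction in (0.1, 0.3, 0.5):
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=test_fraction, stratify=y, random_state=0)
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma="scale"))
    clf.fit(X_tr, y_tr)
    acc = accuracy_score(y_te, clf.predict(X_te))
    print(f"test fraction {test_fraction:.1f} -> accuracy {acc:.3f}")
```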
A. Artificial neural network
An ANN is a set of connected units called artificial neurons, loosely analogous to the biological neurons of the animal brain. The ANN model is shown in Figure 1. Each connection (synapse) between neurons transmits a signal to the next neuron [10]; the receiving (post-synaptic) neuron processes the signal and then signals the downstream neurons connected to it. The output of each neuron is calculated by a nonlinear function of the sum of its inputs, where each synapse carries a real-valued weight. Neurons and synapses change as learning proceeds, increasing or decreasing the strength of the signal sent downstream [1]. A neuron may have a threshold such that the downstream signal is sent only when the aggregate signal is above (or below) that threshold. Artificial neural networks have the ability to learn and model non-linear and complex relationships, which is essential in practice, and they can infer unseen relationships from unseen data, enabling the model to generalize and predict on new inputs.

B. Support vector machine approach
Support vector machines are supervised learning models with associated learning algorithms that analyse data for classification and regression analysis. The model built by the SVM training algorithm assigns new examples to one category or the other, making it a non-probabilistic binary linear classifier. The SVM model is shown in Figure 2. A hyperplane is used to achieve good separation, and the hyperplane should have the maximum distance to the nearest training data points; this distance is called the functional margin [14,15]. The larger the margin, the lower the generalization error of the classifier. SVMs are useful in text and hypertext categorization, as their application can considerably reduce the labelling effort required [12]. SVMs can also be used for the classification of digital images; higher search accuracy can be obtained with SVMs than with traditional query-refinement schemes after three to four rounds of relevance feedback.

DIGITAL IMAGE PROCESSING
Image processing is a basic part of the pipeline, covering image resizing and feature extraction; if it is performed inappropriately, the classification becomes inaccurate. Two fundamental operations are involved in preparing information for an ANN classifier: the first accelerates the learning procedure, and the second is edge cutting, which raises the contrast between the pixels of the object of interest and the pixels of the soil background.

Image Resizing
Resizing the original image to a size suitable for the software in use is the initial phase of image processing. When a digital image is scaled or resized, a new image with a higher or lower number of pixels is generated [13].

Edge cutting
Edge cutting converts the digital image into multiple segments, i.e. sets of pixels or superpixels. Its fundamental objective is to focus on the important object rather than on less relevant regions. Image segmentation is used for finding objects and boundaries such as lines and curves in images, along with many other properties. Image segmentation assigns a label to every pixel in the image such that pixels with the same label share certain attributes.
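A minimal sketch of the Sobel-based edge detection just described — assuming OpenCV and NumPy are available, with a placeholder file name and threshold, and mirroring the four-step segmentation procedure listed below — might look like this:

```python
# Sketch: median filtering followed by Sobel gradient-magnitude edge detection.
import cv2
import numpy as np

def sobel_edges(path, threshold=60):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)     # step 1: input image
    smoothed = cv2.medianBlur(gray, 5)                # step 2: noise reduction
    gx = cv2.Sobel(smoothed, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(smoothed, cv2.CV_64F, 0, 1, ksize=3)
    magnitude = np.sqrt(gx ** 2 + gy ** 2)            # step 3: gradient magnitude
    edges = (magnitude > threshold).astype(np.uint8) * 255
    return edges                                      # step 4: edge map

if __name__ == "__main__":
    # "seed_sample.png" is a hypothetical input image, not a file from the survey.
    cv2.imwrite("edges.png", sobel_edges("seed_sample.png"))
```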
Algorithm for Image segmentation
Step 1: Take the sample image as input.
Step 2: Apply median filtering to the image for noise reduction.
Step 3: Find the edges using the Sobel edge detection algorithm.
Step 4: Output the image with the detected edges.

The feature extraction algorithm, including the overall steps for performing feature extraction, is described in the next section.

Feature extraction algorithm
The Principal Component Analysis Scale-Invariant Feature Transform (PCA-SIFT) feature extraction algorithm was proposed by Y. Ke and R. Sukthankar [9] for extracting local image descriptors and features. Edge detection algorithms are also useful for smoothing the image so that the number of connected components can be reduced.

Sobel edge detection
Edges are found using the Sobel method [6], which uses a derivative approximation; points where the image gradient is maximal are returned as edge points. Kernels of 3×3 dimensions are used for the edge detection operations, so a pair of horizontal and vertical gradient matrices is applied.

Principal component analysis Scale-invariant feature transform (PCA-SIFT)
The scale-invariant feature transform (SIFT) algorithm is used for detecting and describing local features; it essentially highlights the types of pixels that are needed. The resulting features are invariant to uniform scaling and robust to brightness changes and other disturbances. PCA-SIFT [9] is considered a variant of SIFT and is used as the descriptor algorithm for feature extraction. The PCA-SIFT descriptor is a vector of image gradients in the x and y directions computed within the support region; a 39×39 patch (the gradient region) is sampled, so the vector has dimension 3042, which is then reduced to 36 with PCA. Principal Component Analysis (PCA) [8] is a standard technique for dimensionality reduction and has been applied to a broad class of computer vision problems, including feature selection, object recognition and face recognition.

PCA-SIFT algorithm
Step 1: Take the initial image for processing.
Step 2: Perform Scale-Space Extrema Detection.
The following points describe the whole process of the algorithm:
Scale-Space Extrema Detection: The values of sigma and the number of octaves can be modified. The first image of the first octave can be obtained by interpolating the original one. After obtaining the original image, a DoG (difference of Gaussians) pyramid is created.
Keypoint Localisation: To find the extreme points, each pixel in the DoG maps is examined.
Accurate Keypoint Localisation: Points with low contrast or poorly localized on an edge are eliminated; points lying on image edges are also removed. After elimination, the eigenspace of each extreme point is computed.
Orientation Assignment (main orientation assignment): Searching is performed at a certain scale, ensuring that the point under consideration lies within the searchable area.
After that, a weight matrix is constructed; the pixel magnitudes and orientations of the region are calculated and a histogram is built from them. Once the histogram is constructed, the maximum histogram bar and the bars higher than 80 percent of the maximum are found; unsearchable points are deleted and minor orientation points are added.
Keypoint Descriptor: The area is divided into 4×4 sub-regions; after dividing, the locations are rotated and, after distortion, the coordinates are computed.

Related work in this field of computer vision using supervised learning models was described in the earlier section.

OPEN ISSUES AND CHALLENGES
Image classification with supervised learning is not an easy task, and several issues make it difficult to perform well. The following issues and challenges need to be addressed by researchers:
• Back-propagation with multiple hidden layers gives the highest accuracy, but the efficiency of the model is reduced.
• Model selection remains an open problem.
• An important challenge is classification among various combinations of seeds.
• Some supervised learning models face efficiency issues, while others give inaccurate classification results.
• It is unclear how efficient a given classification algorithm is and how many epochs are needed to reach the highest accuracy.

CONCLUSIONS
This survey summarizes the current state of the topic and the issues researchers face, so that it can help researchers improve the accuracy of classification results. The survey addresses problems arising in computer vision solved with machine learning algorithms. Keeping the various issues in mind, this paper provides a detailed account of which model to select and which algorithm is beneficial; the highest achievable accuracy ultimately depends on the dataset. An artificial neural network with back-propagation and multiple hidden layers gives the highest accuracy compared with the threshold rule, the support vector machine and single-hidden-layer back-propagation, but only when a large dataset is available, whereas support vector machines (SVM) give the best accuracy when datasets are small. Principal component analysis is used for dimensionality reduction of images and helps overcome the issue of image blurring. Obtaining the highest classification accuracy remains a challenging task.
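To make the comparison drawn in the conclusions concrete, the sketch below pairs PCA-based dimensionality reduction with a multi-hidden-layer back-propagation network and an SVM; it is an illustration of the idea only, with synthetic feature vectors rather than the wheat or almond datasets cited above.

```python
# Sketch: PCA for dimensionality reduction, then a back-propagation network
# (two hidden layers) versus an SVM, echoing the survey's comparison.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=1500, n_features=64, n_informative=20,
                           n_classes=4, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

models = {
    "BPNN (two hidden layers)": MLPClassifier(hidden_layer_sizes=(64, 32),
                                              max_iter=2000, random_state=1),
    "SVM (RBF kernel)": SVC(kernel="rbf", C=10, gamma="scale"),
}
for name, model in models.items():
    pipe = make_pipeline(StandardScaler(), PCA(n_components=20), model)
    pipe.fit(X_tr, y_tr)
    print(f"{name}: test accuracy = {pipe.score(X_te, y_te):.3f}")
```

Which of the two comes out ahead here depends on the data size and split, which is exactly the point made in the conclusion.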
Development of Lipidic Nanoplatform for Intra-Oral Delivery of Chlorhexidine: Characterization, Biocompatibility, and Assessment of Depth of Penetration in Extracted Human Teeth

Microorganisms are a major cause of root canal treatment failure because of their ability to penetrate within the root anatomy, and irrigation regimens have at times failed owing to the biofilm mode of bacterial growth. Liposomes are vesicular phospholipid structures that might improve penetration into dentinal tubules and increase antibacterial efficacy. Methods: In the present work, chlorhexidine liposomes were formulated and characterized by size, zeta potential, and cryo-electron microscopy (Cryo-EM). Twenty-one single-rooted premolars were extracted and irrigated with liposomal chlorhexidine or 2% chlorhexidine solution to evaluate the depth of penetration. An in vitro cytotoxicity study of liposomal chlorhexidine was performed on the L929 mouse fibroblast cell line. Results: The average particle size of the liposomes ranged from 148 ± 4.52 nm to 223 ± 3.63 nm, with a polydispersity index of <0.4. Cryo-EM images showed spherical vesicular structures. The depth of penetration of liposomal chlorhexidine was higher than that of plain chlorhexidine in the coronal, middle, and apical thirds of the roots of extracted human teeth when observed under the confocal laser scanning microscope. The pure drug exhibited an IC50 (the concentration at which 50% of the cells are dead after drug exposure) of 12.32 ± 3.65 µg/mL and 29.04 ± 2.14 µg/mL (on L929 and 3T3 cells, respectively), whereas liposomal chlorhexidine exhibited an IC50 of 37.9 ± 1.05 µg/mL and 85.24 ± 3.22 µg/mL (on L929 and 3T3 cells, respectively). Discussion: Antimicrobial analysis showed a decrease in bacterial colony counts when treated with liposomal chlorhexidine compared with 2% chlorhexidine solution. The novel nano-liposomal chlorhexidine was less cytotoxic to mouse fibroblast L929 cells and more effective as an antimicrobial agent, along with a higher penetration ability.

Introduction
Microorganisms are one of the vital factors responsible for the failure of endodontic treatment. Bacteria play an essential role in the development and perpetuation of pulpal and periapical diseases, as demonstrated in human models and animal studies [1]. Chlorhexidine digluconate (CHX) is a potential drug molecule for endodontic infections; CHX interacts with phospholipids and lipopolysaccharides on the bacterial cell membrane.

Preparation of Liposomal CHX
The CHX solution was lyophilized and used in powder form. The thin-film hydration technique was used for the preparation of liposomes, following previous work reported by our group [14,15]. Briefly, HSPC and cholesterol (90:10; 100 mg of total lipids) were dissolved in 10 mL of chloroform in a round-bottomed flask. The organic solvent was evaporated in a rotary evaporator under vacuum to obtain a thin film. The dried thin film was hydrated with a phosphate buffer solution of pH 7.4 containing 30 mg of CHX. After hydration, the dispersion was sonicated using a probe sonicator (Model VCX750, Sonics & Materials, Inc., Newtown, CT, USA) for 20 min at 40% amplitude (750 W) with a 6 s pulse. The dispersion was then subjected to high-speed centrifugation (22,000 rpm, 4 °C, 45 min) to separate the free drug. The liposomal pellet was re-dispersed in 5 mL of water and stored in a refrigerator.
A similar liposomal formulation was prepared by incorporating rhodamine B dye (1% w/v solution) into the chloroform lipid solution for visualization under a confocal laser scanning microscope. The composition of the different liposome batches is provided in Table 1 (results presented as mean ± SD, n = 3; HSPC: hydrogenated soy-phosphatidylcholine; mg: milligram; mV: millivolt). A control liposomal formulation without the drug was also prepared. The formulated liposomal CHX was analyzed for particle size, PDI, and zeta potential using a Zetasizer (Nano ZS, Malvern Instruments, UK). The particle size of the liposomal formulations was determined by dynamic light scattering (DLS): the liposomes were irradiated with a laser directed at the middle of the cell at a fixed detection angle of 90°, variations in the intensity of the scattered light were analyzed, and the results were taken as the average of 10 measurements. In the presence of an electric field, the particles undergo electrophoretic mobility, which is the basis for the determination of zeta potential by laser Doppler velocimetry (LDV) and phase analysis light scattering (PALS) [16,17].

Determination of Encapsulation Efficiency
The entrapment efficiency of the CHX liposomes was determined using a previously reported method. In brief, liposomes were dissolved in absolute alcohol (5 mL) with the aid of water-bath sonication for 10 min together with Triton X. The concentration of CHX in the resulting solution was determined spectrophotometrically at 254 nm using a UV-visible spectrophotometer (UV-1700E, Shimadzu, Kyoto, Japan) in triplicate. For the estimation of CHX, 254 nm was selected as λmax, determined by scanning the CHX solution in the UV/Vis spectrophotometer; the figures related to this experiment are provided in the Supplementary Material (I. Selection of 254 nm as λmax for the estimation of CHX). The UV spectroscopic method was optimized at 254 nm against a blank consisting of the blank liposomal solution (containing only the excipients used for preparing the liposomes); therefore, the presence of ingredients from broken liposomes did not affect the accuracy of the estimation. The corresponding UV/Vis spectroscopic scans for plain CHX solution, blank liposomes, and CHX liposomes are provided in the Supplementary Material (Figures S1-S3). The encapsulation efficiency, expressed as the percentage of entrapment, was determined through the following relationship [18]: entrapment efficiency (%) = (amount of CHX entrapped in the liposomes / total amount of CHX added) × 100.

Surface Morphology
The surface morphology of the prepared liposomal CHX was studied using a cryo-electron microscope (Gatan Alto 2500 Cryo Transfer System, Pleasanton, CA, USA). Samples of liposomal CHX were dropped onto a copper grid and then analyzed under the EM at various magnifications and powers, and the captured images were later analyzed.

Fourier-Transform Infrared Spectroscopy (FTIR)
The FTIR spectra of plain CHX, a physical mixture of CHX and excipients, and lyophilized CHX liposomes (Batch 3) were analyzed to assess possible interactions. The samples were mixed with KBr (1:1 w/w) and the spectra were recorded in the region of 4000-400 cm⁻¹ using an FTIR-8300 (Shimadzu, Kyoto, Japan).

In Vitro Drug Release Study
In vitro drug release from the liposomal formulation was assessed using an Electrolab TDT-08L dissolution tester (USP).
Phosphate buffer (pH 6.8) was used as a dissolution medium (as oral cavity pH ranges from 6.7-7.3) for assessing the drug release. The samples of dissolution medium were collected at specific time points (1 mL of the dissolution medium) and replaced with a fresh buffer solution. The collected samples were then analyzed using UV/Visible spectrophotometer and the drug release was estimated. Preparation of the Teeth for Experimentation Premolar single-rooted mandibular teeth were collected after extraction for orthodontic reasons. They were cleaned thoroughly and stored in peroxide solution. This study measures the depth of penetration of two irrigants. Before the study, the sample size was derived by assuming that the mean difference between the depth of penetration obtained by two different irrigants may be 1 µm, and thus the "effect size" was assumed to be 1 µm. The confidence interval indicates that there might be variation of >1 µm or <1 µm in order that this variation may be 95%. Based on this assumption, a total sample size of 16 teeth was estimated by assuming the effect size of 1 µm at 95% confidence interval and 80% power. However, the minimum sample size estimated was 16 and a total of 45 teeth were collected. Approval from the Institutional Ethical Committee was obtained (Institutional Ethical Committee Approval No.: 27/2019, Kasturba Medical College and Kasturba Hospital, Manipal) for this procedure. The informed consent from the patients was not obtained as the anonymized extracted teeth were used in this study. Out of the 45 teeth collected, some of the teeth were discarded due to irregularities in the teeth. Finally, the number of teeth used for the study was 21. All the teeth were de-coronated at the cemento-enamel junction and were standardized to uniform length with the help of a diamond disc. Working length was arbitrarily 10 mm for all the selected teeth. Biomechanical preparation was performed with ProTaper rotary files up to F2 (Dentsply, Maillefer, Tulsa, OK, USA). Sodium hypochlorite was used as an irrigant to remove debris. Final irrigation was performed with ethylene diamine tetraacetic acid (EDTA) to make a smear-free layer using a Luer-lock needle for a period of 1 min. Determination of Depth of Penetration of Liposomal CHX in Dentinal Tubules Twenty-one teeth were selected for testing the depth of penetration. The following treatment was carried out for Group I and Group II: Group I: Seven teeth were irrigated with 2% w/v CHX solution (added with 1% w/v rhodamine B dye); Group II: Fourteen teeth were irrigated with liposomal CHX containing rhodamine B dye. (The rhodamine B-loaded liposomes were formulated similar to the CHX-loaded liposomes. The only difference was the addition of rhodamine B in place of CHX). All the teeth were irrigated with 5 mL of respective irrigating solutions using 5 mL disposable syringe attached with a Luer-lock needle for a period of 1 min. The needle was maintained at 1 mm short of apex into the root canal and EndoActivator (Dentsply Sirona, Charlotte, NC, USA) was used to agitate. The same irrigation procedure was repeated two times. The volume of irrigation was 10 mL. All the teeth were washed with phosphatebuffered saline after the irrigation procedure. Each sample was sectioned into three parts, each of 1 mm thickness viz., coronal third, middle third, and apical third with a low-speed diamond saw. 
The sections were then viewed under a confocal laser scanning microscope (CLSM; LSM 980, Carl Zeiss, Germany) to evaluate the penetration depth of plain CHX and liposomal CHX. The depth of penetration of liposomal CHX and plain CHX solution into the dentin surface was detected by fluorescence at 10× magnification, with excitation at 543 nm and emission collected at 560 nm. The captured images were divided into four equal parts, the depth of penetration in each part was measured, and the mean values for the penetrated irrigants were recorded.

Cell Viability Assay
The cell viability assay of pure CHX and liposomal CHX was performed on two mouse fibroblast cell lines, L929 and 3T3. The cells were seeded in a 96-well plate at a density of 5 × 10³ cells/well in DMEM supplemented with fetal bovine serum (10%; Thermo Fisher Scientific, Waltham, MA, USA) and antibiotic solution (12%; HiMedia Laboratories, Mumbai). The cells were incubated with 100 µL of test substance at CHX concentrations, for both plain CHX solution and liposomal CHX, ranging from 7.8 to 250 µg/mL. Plain liposomes devoid of CHX were also tested over the same concentration range, and untreated cells were used as the control. The treated L929 and 3T3 cells were incubated for 24 h. After the incubation period, the supernatant was removed and 100 µL of 0.5 mg/mL MTT (HiMedia, Mumbai, India) dissolved in HBSS was added to the plates. The cells were further incubated for 3 h at 37 °C, after which the supernatant was removed and 100 µL of DMSO was added. The absorbance was measured at 570 nm using a microplate reader (BioTek Instruments, Santa Clara, CA, USA). The percentage cell viability was calculated as (absorbance of treated cells / absorbance of untreated control cells) × 100, and IC50 values were calculated with GraphPad Prism 9.0.

Antimicrobial Study
Clinical strains of Staphylococcus aureus, Fusobacterium nucleatum (F. nucleatum), and Streptococcus mutans were cultured. The cells were inoculated in brain heart infusion (BHI) broth, and in egg yolk agar for F. nucleatum, at a concentration of 10⁸ cells/mL to develop a biofilm. Aliquots of 100 µL of the standard cell suspension were pipetted into polystyrene 96-well plates and incubated under anaerobic conditions for 72 h. The biofilms were washed with phosphate-buffered saline and treated with a dilution series of liposomal CHX (equivalent to 2% w/v of plain CHX) or CHX solution (2% w/v) for 72 h before the minimal inhibitory concentration readings were determined. Concurrently, a planktonic bacterial cell suspension at 10⁸ cells/mL was used to obtain the MIC readings.

Statistical Analysis
Statistical analysis was performed by two-way ANOVA with Tukey's post-hoc test (SPSS software, version 23.0, SPSS Inc., Chicago, IL, USA).

Preparation and Characterization of Liposomal CHX
The liposomes were successfully prepared by the thin-film hydration technique. The particle size of the liposomes was analyzed using the Malvern Zetasizer; the average liposomal particle size ranged from 148 ± 4.52 to 223 ± 3.63 nm. Batch 3, optimized on the basis of size, zeta potential, and drug encapsulation efficiency, had an average particle size of 178.40 ± 4.41 nm (Table 1). The polydispersity index for all batches was below 0.4, indicating the homogeneity of the dispersion.
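As an illustration of how the percentage-viability values and IC50 estimates described above are typically obtained — with made-up absorbance readings rather than the study's data, and with SciPy standing in for GraphPad Prism — a minimal sketch follows:

```python
# Sketch: % viability from MTT absorbances and IC50 from a four-parameter
# logistic fit. Concentrations and absorbances below are illustrative only.
import numpy as np
from scipy.optimize import curve_fit

conc = np.array([7.8, 15.6, 31.25, 62.5, 125.0, 250.0])    # ug/mL
abs_treated = np.array([0.92, 0.85, 0.61, 0.38, 0.21, 0.12])
abs_control = 1.00                                           # untreated wells

viability = abs_treated / abs_control * 100.0                # % viability

def four_pl(x, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (x / ic50) ** hill)

params, _ = curve_fit(four_pl, conc, viability,
                      p0=[0.0, 100.0, 40.0, 1.0], maxfev=10000)
print(f"Estimated IC50 ~ {params[2]:.1f} ug/mL")
```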
The zeta potential results for the different liposome batches are shown in Table 1. The zeta potential values of all the prepared batches were positive: 24.1 ± 2.60, 29.3 ± 2.42, 28.1 ± 2.56, 34.2 ± 2.60, and 37.8 ± 2.12 mV for Batches 1 to 5, respectively. The positive zeta potential of the CHX liposomes may be attributed to hydrophobic interaction between chlorhexidine and HSPC, together with electrostatic interaction between the ketone group of HSPC and the amine group of chlorhexidine, which might lead to the positive surface charge of the liposomes [19][20][21][22].

The Cryo-EM images of the optimized batch (Batch 3) of liposomes are shown in Figure 1. The vesicles possessed a spherical vesicular structure with a size of about 200 nm, consistent with the results obtained from the Zetasizer analysis. Along with the almost spherical structure, the vesicles were intact, discrete, and bilayered (Figure 1a,b). A Cryo-EM image of the liposomes at the 100 nm magnification scale is shown in Figure 1c and shows similar features to those observed in Figure 1a,b.

The results of FTIR spectroscopy are shown in Figure 2. The FTIR data provide an indication of any chemical interaction between the drug and the excipients. Figure 2 (drug) shows the FTIR spectrum of the plain drug. The FTIR spectrum of CHX is characterized by the main stretching vibrations at 3300 to 3500 cm⁻¹ for the N-H group. The bands at 2850 to 3000 cm⁻¹ are stretching vibrations attributable to the aliphatic C-H group. There are also peaks at 1450 to 1550 cm⁻¹ that can be assigned to the carbon skeleton of the aromatic ring, and at ≈1238 to 1251 cm⁻¹ corresponding to the stretching vibration of the aliphatic amine group (C-N) [23] (Figure 2 (drug)). These characteristic peaks were unaltered in the physical mixture of CHX and the excipients (Figure 2 (1)) and in the liposomal formulation (Figure 2 (2)), indicating chemical compatibility between the drug and the excipients.

In Vitro Drug Release Study
The in vitro drug release study was carried out for the plain 2% CHX solution as well as for the CHX-loaded liposomal formulation (Figure 3). The liposomal formulation showed a sustained release of drug over 24 h, whereas the plain CHX solution showed 100% drug release at 4 h. A 24 h release period is not directly relevant to the root canal of the teeth; however, the authors are working extensively on dental products and are exploring the prospect of using the extended-release liposomal formulation as a long-acting dental insert. The purpose of analyzing the drug release over 24 h was therefore related to the hold time of the drug for such products.

Evaluation of Liposomal CHX with Respect to Depth of Penetration in Dentinal Tubules
The results for the depth of penetration of plain CHX solution and liposomal CHX into dentinal tubules are shown in Table 2 and Figure 4. Liposomal CHX showed significantly (p < 0.001) better penetration in all regions of the dentinal tubules compared with the plain CHX solution. With liposomal CHX, the depth of penetration was significantly (p < 0.001) higher in the coronal third (1549.91 ± 422.56 µm) than in the middle (1115.68 ± 410.50 µm) and apical (758.34 ± 93.46 µm) thirds, and the depths observed in the middle and apical thirds also differed significantly (p < 0.001) from each other. When the two groups were compared, the plain CHX group exhibited a significantly (p < 0.001) lower depth of penetration than the liposomal CHX group in all regions, with an impact of intervention of 41.6% (Table 3). The depth of penetration refers to the rhodamine B dye and does not reflect the rate of release of chlorhexidine; it is a property of the liposomes, and the results provided in Table 2 and Figure 4 therefore complement each other.

Antimicrobial Assay
The viable bacteria remaining after exposure to the plain CHX solution were considerably more numerous than those observed with liposomal CHX, indicating that liposomal CHX eliminated more bacteria than the plain CHX solution. Liposomal CHX decreased the bacterial load after 24 h to a greater extent than the plain CHX.
The colony counts of bacterium after 72 h of exposure to liposomal CHX were found to be zero (Table 4). Cell Viability Assay The cytotoxicity of plain CHX solution and liposomal CHX against L929 and 3T3 mouse fibroblast cells was determined by the MTT assay. The pure CHX solution exhibited an IC 50 value of 12.32 ± 3.65 µg/mL in L929 cell line and 29.07 ± 2.14 µg/mL in 3T3 cell line. Liposomal CHX exhibited an IC 50 value of 37.90 ± 1.05 µg/mL and 85.24 ± 3.22 µg/mL in L929 and 3T3 cell lines, respectively. The percentage of cytotoxicity was less in liposomal CHX compared with the pure CHX solution ( Table 5). The results showed approximately 3-fold decreased cytotoxicity in liposomal CHX as compared with the plain CHX in both cell lines. The obtained results were further confirmed by observing the cellular morphology under the brightfield microscope after different treatments. The brightfield microscopic images are given in Supplemented Material (II. Brightfield microscopic images; Figures S4-S7). As shown in these figures, the cells were dead even at the lower concentration of plain CHX solution; however, the cells retained their integrity after various treatments of liposomal CHX. Blank liposomes (without CHX) did not show any cytotoxicity. More than 80% of both cells were alive even at the highest tested concentration (250 µg/mL). The results are presented as mean ± SD, n = 3. CHX: Chlorhexidine digluconate; IC 50 : Half-maximal inhibitory concentration. Discussion Nanoencapsulation of CHX has been shown to increase the action of the drug in previous reports. In this study, the preparation of CHX-loaded liposomes was achieved using the thin-film hydration method. In this experiment, an attempt was made to develop liposomal CHX, which has scarcely been reported for the study of the depth of penetration of the teeth using rhodamine B dye with a confocal laser scanning microscope (CLSM). The liposomal formulation had good penetration when compared with the plain CHX. The application of nanotechnology approaches for effective delivery of CHX has been reported in a few of the previous reports. In a previous study, mesoporous silica nanoparticles of CHX were studied for antibacterial effects. Mesoporous silica nanoparticles were effectively loaded with CHX and its release from the nano-CHX was confirmed [20]. Based on the positive results derivation, it could be assumed that the CHX nanoformulation may have better penetrating efficiency in dentin, which may subsequently eradicate oral biofilms and control the bacteria. The hydrophobic components of liposomes are repelled by water molecules leading to liposome self-assembly. Additionally, phosphatidylcholine (PC) and dipalmitoyl PC can be used for liposome generation [7]. In another study, encapsulation of CHX was achieved by loading CHX inside the polymeric self-assembled tri-layered nanoparticles (TNPs). The TNPs improved the physicochemical equilibrium of nanoparticles without the use of additional surfactants in the aqueous mixture solution. These TNPs proved to efficiently encapsulate CHX for the targeted drug delivery of dentinal matrix, which was attempted to be used in disinfection of the root canal [18]. The colloidal system's stability is assessed by zeta potential. It is a measure of the repulsive forces that exist between the particles. Particles with stronger repulsive forces are less likely to combine and are stable. 
The surface charge of the particles reflects the stability of the nanosuspension, since vesicular dispersions with higher zeta potential values are electrostatically stabilized [23]. In the present study, the liposomes displayed a considerably high zeta potential, suggesting strong physical stability; in addition, a high positive charge is beneficial for cell permeation [24]. When the concentration of CHX was increased from 10 to 50 mg, the zeta potential values increased; particles with stronger repulsive forces are less likely to aggregate and are therefore more stable. Although Batch 5, with 50 mg of CHX, showed the highest zeta potential, Batch 3 was considered the optimized batch based on particle size, entrapment efficiency, and zeta potential. In this study, the drug encapsulation efficiency (%EE) was influenced by the total amount of CHX incorporated in the liposomes. The EE values ranged between 36 ± 2.42% and 76 ± 2.80%, with Batch 3 showing the highest value of 76 ± 2.80%. The %EE increased considerably as the CHX amount was raised from 10 to 30 mg; however, a further increase to 40 mg led to a reduction in %EE, which may be due to leakage of CHX from the vesicles [25]. To check the visibility of the lipid bilayers, the vesicles were suitably focused in Cryo-EM, and the lipid bilayers of the liposomes were clearly visible (Figure 1). In the FTIR spectra, the absorbance peaks indicate the different functional groups, since specific bond types, and thus specific functional groups, absorb different wavelengths of infrared radiation, as explained above in the Results.

Nanoencapsulation can result in sustained drug release, and this prolonged delivery period may lead to complete elimination of bacteria in dentinal tissues [18,26]. The in vitro diffusion study demonstrated sustained release of CHX from the liposomes compared with the pure CHX: the release of CHX from liposomes approached 100% over 24 h, while 100% release of the pure drug was obtained within 4 h. These results suggest that CHX-loaded liposomes can act as a reservoir for long-term drug release once located inside the dentinal tubules, and that liposomal CHX can therefore provide longer-term antimicrobial action than the pure solution. An optimal lipid ratio may help to further sustain the release of the drug, reducing the frequency of use, which would be more convenient for patients than the normal CHX solution. The drug release kinetics were deduced from the in vitro release data by fitting the zero-order, first-order, and Higuchi equations [27]. For the zero-order pattern, the fitted equation was y = 3.9041x + 22.234 with R² = 0.7655; for the first-order pattern, y = 0.0338x + 1.3825 with R² = 0.5562; and for the Higuchi pattern, y = 22.449x with R² = 0.8482. The drug release from the liposomes was therefore diffusion dominated, following Higuchi's equation. The teeth were treated with 2% w/v CHX solution (with 1% w/v rhodamine B dye added) or with liposomal CHX containing rhodamine B dye.
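Before turning to the penetration results, the release-kinetics comparison above can be made concrete. The sketch below fits the zero-order, first-order, and Higuchi models to hypothetical cumulative-release data (not the study's measurements) and compares the R² values; the fitted coefficients will of course differ from those reported above.

```python
# Sketch: comparing zero-order, first-order and Higuchi release models by R^2.
# The time points and cumulative-release values are illustrative placeholders.
import numpy as np
from scipy.stats import linregress

t = np.array([0.5, 1, 2, 4, 6, 8, 12, 24])             # hours
release = np.array([18, 25, 34, 47, 56, 63, 76, 97])   # cumulative % released

fits = {
    "zero-order (Q vs t)":           linregress(t, release),
    "first-order (ln(100-Q) vs t)":  linregress(t, np.log(100.0 - release)),
    "Higuchi (Q vs sqrt(t))":        linregress(np.sqrt(t), release),
}
for name, fit in fits.items():
    print(f"{name}: slope={fit.slope:.4f}, intercept={fit.intercept:.4f}, "
          f"R^2={fit.rvalue**2:.4f}")
```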
The results indicated that the depth of penetration of normal CHX solution is higher in coronal (700 ± 75 µm) compared with the middle third (623 ± 68 µm) (p > 0.001) and apical third (331 ± 39.40 µm) (p < 0.001). However, the depth of penetration was not significantly different between the coronal and middle third of the roots with respect to normal CHX. This may be due to the fact that the tubule diameter is almost similar to the coronal and middle thirds and the agitation of EndoActivator might have helped in penetrating the above areas compared with the middle third. Rhodamine B dye was mixed with both liposomal CHX as well as normal CHX irrigating solutions at 1% concentration to evaluate the depth of penetration [28]. The penetration of irrigant is essential to eradicate the microorganisms. E. faecalis can penetrate deep into dentinal tubules and co-aggregate with other organisms [29]. Dentinal tubules may be invaded by bacteria, where they are unaffected by instrumentation and irrigation. These procedures are effective only on the surface of the canal [11]. Endodontic infection caused by bacteria that have colonized the dentinal tubules might cause failure in treatment. According to Zou et al. (2010), the microorganisms can produce endotoxin that can penetrate up to 500 µm into the root dentin. The failure in endodontic treatment may be attributed to the deep penetration of bacteria into dentinal tubules and also the buffering capacity of dentin which protects the bacteria from the CHX effect [30,31]. The systems commonly used as irrigant activation techniques include sonic, ultrasonic agitation, manual activation with gutta-percha cones, and agitation with brushes [11]. In a study by Kanumuru et al. (2015), it was shown that for the effective penetration of irrigants up to a working length and into the lateral canals, this can be achieved by the irrigant activation using reciprocation movement [10]. In the previous study, the mean penetration depth values observed with 2% CHX solution in coronal, middle, and apical thirds were 138, 80, and 44 µm, respectively for the conventional syringe group; whereas the mean penetration depth values in passive ultrasonic irrigation were 209, 138, and 72 µm, respectively in coronal, middle, and apical thirds [11,28]. In another study, it was proved that the irrigant penetrated to a depth of 249.9 µm in coronal third, 163 µm in middle third, and 42 µm in apical third [30]. In contrast, the CHX solution used in the present study showed comparatively higher values for depth of penetration. All the previous reports and the results of the present study indicated high variability in the data of depth of penetration. This high variability may be attributed to several factors. Dentin variables, such as depth, type, and caries will affect the size and patency of dentinal tubules [11,32]. In addition, temperature and moisture may also influence penetration [33]. Moreover, the application of EndoActivator could have influenced the depth of penetration [34]. Herein, Cyro-EM was used to study the morphology of the liposomes. The liposomes were nearly spherical in shape. Bilayered vesicular structure, which is a prominent feature of liposome, was observed in Cryo-EM. The freeze fracture electron microscope is considered the ideal method for the characterization of liposomes [35]. Liposomal CHX was able to penetrate deep inside the tubules compared with normal CHX. This may be due to the reduced particle size. 
Moreover, the spherical nature of liposomes can be the main cause of the increase in penetration [36]. In another study, the diode laser-assisted irrigant activation technique showed better penetration depth in all three regions of root dentin [34]. Furthermore, the presence or absence of the smear layer determines the depth. In this study, EDTA was used to remove the smear layer during the biomechanical preparation of the tooth. Our study is in agreement with previous studies, in which the depth of penetration was greatest in the coronal third, followed by the middle and apical thirds [30]. Animal studies have shown that liposomes may either enhance or reduce penetration [37,38]. CLSM was used in our study because the sample preparation is simpler, produces fewer artefacts, and provides accurate results [34]. According to Galler et al. (2019), sodium hypochlorite and EDTA showed median penetration depths of 700-900 µm [39]. Previous studies have shown that tubular density diminishes toward the apex, which reduces irrigant penetration in that region. As the selected teeth came from younger patients who had opted for orthodontic treatment, the chance of tubular sclerosis is unlikely. EndoActivator was used here as it has shown better irrigation efficacy than the conventional needle [39]. This might have enhanced the dentinal penetration. However, one of the problems with liposomes is physical instability, which may lead to leakage of CHX from the lipid vesicles during storage and in vivo administration. This can be addressed with an optimized composition of lipids and excipients. Microorganisms are the major cause of treatment failure in root canals; they can invade the dentinal tubules and co-aggregate with one another [32]. Antimicrobial analysis revealed a reduction in the bacterial counts of Staphylococcus aureus and Streptococcus mutans. Streptococcus mutans was selected because it has been shown to persist in symptomatic and asymptomatic apical periodontitis. Staphylococcus aureus is one of the organisms commonly isolated from re-treatment cases. Fusobacterium nucleatum is the organism generally observed in primary endodontic infections [40,41]. Liposomal CHX was effective in reducing the bacterial counts after 72 h. In this study, as liposomal CHX penetrated deep into the dentinal tubules and reduced the bacterial counts, this formulation may be effective in eradicating bacteria within the root canals and the oral cavity when used as a disinfectant. The cell viability and half-maximal inhibitory concentration (IC 50 ) assays of plain CHX solution and liposomal CHX were performed in L929 and 3T3 mouse fibroblasts by the MTT assay. This study was performed because CHX is commonly used as a cavity disinfectant and, thus, may penetrate dentin and come into contact with pulpal fibroblasts. CHX as a cavity disinfectant may inhibit matrix metalloproteinases (MMPs) and thereby increase the life span of composite restorations [16]. In a previous study, the highest toxicity of 2% CHX was seen at a dilution of 0.016% at an interval of 72 h [40]. In vitro studies on the cytotoxicity of CHX against human gingival cells indicate that its toxic potency depends on the composition of the exposure medium, the exposure dose, and the duration of exposure [41]. Our data exhibited dose-dependent toxicity of both plain CHX solution and liposomal CHX. However, considerably reduced cytotoxicity was observed in the liposomal form at 24 h.
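As a rough illustration of how IC50 values such as those reported above can be derived from MTT viability data, the following sketch fits a four-parameter logistic (Hill) dose-response curve. The concentration-viability pairs are hypothetical placeholders, and this fitting routine is only one of several reasonable choices; it is not claimed to be the analysis pipeline used in this study.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, top, bottom, ic50, slope):
    """Four-parameter logistic (Hill) dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** slope)

# Hypothetical viability data (% of untreated control); illustrative only.
conc = np.array([1, 5, 10, 25, 50, 100, 250], dtype=float)       # µg/mL
viability = np.array([98, 90, 75, 55, 38, 22, 10], dtype=float)  # % viable

# Initial guesses: full viability at the top, none at the bottom, IC50 near mid-range.
p0 = [100.0, 0.0, 30.0, 1.0]
params, _ = curve_fit(hill, conc, viability, p0=p0, maxfev=10000)
top, bottom, ic50, slope = params
print(f"Estimated IC50 ≈ {ic50:.1f} µg/mL (Hill slope {slope:.2f})")
```

A higher fitted IC50 for the liposomal formulation than for the plain solution corresponds to lower cytotoxicity at an equal CHX dose, consistent with the roughly three-fold difference reported above.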
Arbitrarily, since 2 mL of irrigant is used as a cavity disinfectant in tooth, the liposomal CHX may be considered due to its decreased cytotoxicity. While the liposomes of the present study show utility in cell killing simply by delivering the drug cargo to the cell interior, other drug formulations require the release of the drug directly at the subcellular site/organelle of action (e.g., nucleus, mitochondria, lysosome, endoplasmic reticulum, Golgi body). Although cell surface ligands are effective for active targeting as well as in improving the drug efficacy, this is not always enough to deliver the drugs to specific subcellular locations. This aspect has prompted further efforts to develop more sophisticated nanosystems [42,43]. The stability of these liposomes at different storage conditions is not assessed in this study. However, based on our previous studies, it can be presumed that the prepared liposomes could be stable for 6 months at 5 ± 3 • C. Nevertheless, a detailed stability study of these liposomes has to be performed to determine the shelf life. Although encouraging results have been obtained with liposomal CHX in the present study, the major limitation is that the antimicrobial efficacy of the liposomal formulation could have been tested within the root canal of the extracted tooth. However, a future perspective of this study is to carry out a detailed toxicity assessment of this formulation and then perform a clinical study to appraise the results obtained in this preliminary study. Conclusions In the present work, liposomal formulation of CHX demonstrated enhanced penetration into dentinal tubules as well as better antimicrobial activity compared with CHX alone. This provides insight into the potential of liposomal formulations and development of delivery system for dental infections. Further optimization of this system can prove to be quite beneficial in the development of dental disinfection with prolonged therapeutic activity. Owing to the recent advancement in drug delivery for root canal infection, nanotechnology proved to be a potential platform for the efficient treatment of these infections. Although CHX has widely been considered as a standard drug for endodontic applications, microorganisms are generally observed even after its use, and it has been attributed to a lack of drug penetrating efficiency into dentinal tubules. Fabrication of liposomes is one of the approaches to improve the efficacy of CHX and to overcome the drawbacks. Supplementary Materials: The following supporting information can be downloaded at: https: //www.mdpi.com/article/10.3390/nano12193372/s1. Figure S1. UV spectrum of CHX solution in water recorded using UV/ Vis Spectrophotometer; Figure S2. UV spectrum of blank liposomes (containing only the excipients of liposomes without CHX) recorded using UV/ Vis Spectrophotometer; Figure S3. UV spectrum of CHX liposomes recorded using UV/ Vis Spectrophotometer; Figure S4. The bright-field microscopic images of L929 cells at 24 h after different treatments of plain CHX solution; Figure S5. The bright-field microscopic images of L929 cells at 24 h after different treatments of liposomal CHX; Figure S6. The bright-field microscopic images of 3T3 cells at 24 h after different treatments of plain CHX solution; Figure S7. The bright-field microscopic images of 3T3 cells at 24 h after different treatments of liposomal CHX. 
Informed Consent Statement: Informed consent from the patients was not obtained because only anonymized extracted teeth were used in this study. Data Availability Statement: Not applicable.
Performance, kinetic, and biodegradation pathway evaluation of anaerobic fixed film fixed bed reactor in removing phthalic acid esters from wastewater Emerging and hazardous environmental pollutants like phthalic acid esters (PAEs) are one of the recent concerns worldwide. PAEs are considered to have diverse endocrine disrupting effects on human health. Industrial wastewater has been reported as an important environment with high concentrations of PAEs. In the present study, four short-chain PAEs including diallyl phthalate (DAP), diethyl phthalate (DEP), dimethyl phthalate (DMP), and phthalic acid (PA) were selected as a substrate for anaerobic fixed film fixed bed reactor (AnFFFBR). The process performances of AnFFFBR, and also its kinetic behavior, were evaluated to find the best eco-friendly phthalate from the biodegradability point of view. According to the results and kinetic coefficients, removing and mineralizing of DMP occurred at a higher rate than other phthalates. In optimum conditions 92.5, 84.41, and 80.39% of DMP, COD, and TOC were removed. DAP was found as the most bio-refractory phthalate. The second-order (Grau) model was selected as the best model for describing phthalates removal. In addition to benefits of anaerobic bioreactors mentioned above, the AnFFFBR and other earlier anaerobic attached growth processes have additional advantages over suspended growth, which are related to using immobilized cells on the carrier (biofilm). These advantages include higher resistance to environmental shock (toxic compounds, pH, and temperature), greater population variation, and a higher substrate utilization rate 19,21 . As mentioned before, due to their adverse effect and extensive application, the biodegradation of PAEs should be evaluated and compared for finding the best eco-friendly phthalate whose wastewater can be biodegraded better than others. As highlighted in the previous paragraph, short chains PAEs can be biodegraded with higher rates compared with phthalates with long chains. Therefore, the major objective of this study was to investigate and compare diethyl phthalate (DEP), dimethyl phthalate (DMP), diallyl phthalate (DAP), and phthalic acid (PA) biodegradations to find the best suitable phthalate which can be suggested for industrial applications. It should be noted that there is very little or even no information on DAP biodegradation, especially in anaerobic conditions. These four selected phthalates have molecular weights lower than 250 g/mol and can be considered as short-chains phthalates. Moreover, their solubility is significantly higher than other phthalates 17 . However, over the past decades, many environmental engineers tried to find a feasible and simple way for understanding bioreactors' behavior with different substrates. Although, a number of studies have been presented for the mentioned biotechnological issues, most of them only focused on removal efficiencies, and others merely presented the kinetic coefficients [22][23][24] . Furthermore, there is still a research gap for evaluating and interpreting all the mathematical modeling, kinetic coefficients, and biomass concentrations (or other bioreactor responses) at the same time. Thus this study has been conducted in order to investigate the relations among these three parameters and present an applicable way for interpreting, evaluating, and controlling the bioreactor performance. 
Scientific RepoRts | 7:41020 | DOI: 10.1038/srep41020 Metabolic intermediates which had been formed during enzymatic bio-reactions and biogas production were analyzed for all selected phthalates. Finally, interactions among various parameters and bioreactor performance, such as biofilm mass, which have not been well studied before, were examined in detail. Set-up and operation of bioreactor. A laboratory scale rectangular cube Plexiglas reactor, with length and width of 10 cm, height of 70 cm, and operating volume of 6 L, was used. The top of the bioreactor was equipped with gas collector and connected to a gas meter. The heater was placed in a bioreactor chamber to keep the temperature around 25 °C. The bioreactor was filled with high-density polyethylene carriers with 535 m 2 .m −3 and 0.95-0.98 g.cm −3 of active surface area and density, respectively, as a fixed bed for biofilm growth of microbial mass. Total available surface area in AnFFFBR was 1.6 m −2 . Synthetic wastewater was fed continuously to the bottom of the AnFFFBR from the storage tank through a dosing pump (Etatron-Italy). In order to have a COD/nitrogen/phosphorous ratio of 350/5/1, NH 4 HCO 3 and NH 4 Cl in the same portion were used for the nitrogen source, and KH 2 PO 4 was used for the phosphorous source as nutrients. Furthermore The anaerobic sludge of municipal wastewater treatment plant (MWTP) was utilized as a seed to set up the AnFFFBR. For adaptation, initially 600 mg/L of glucose was used as the sole carbon source. After 67 days, when the steady-state condition was attained (the soluble COD removal variations were below 2% at least for 5 days, and one-way ANOVA test was used to confirm that these variations were not statistically significant), the second stage was started and the phthalic acid (PA) was gradually added. At this stage, 150 mg/L of glucose was replaced with 75 mg/L of phthalic acid (PA) (influent wastewater had 75 mg/L of PA and 450 mg/L of glucose). The substrate substitution was continued with a similar rate for three further stages in sequence, until only the PA remained as the substrate (300 mg/L of PA and 0 mg/L of glucose). A similar method was applied for other selected phthalates with the order of DMP, DAP, and DEP in sequence. It should be noted that the initial acclimation sludge had been kept in parallel and in similar conditions (600 mg/L of glucose and a similar composition of nutrient and trace elements) for further AnFFFBR seeding, when the new phthalate substituted the previous one (e.g. when the PA was replaced with DMP). A similar method applied for other phthalates. In the start-up period, an additional dosing pump was used to recycle washed-out sludge from the settling tank. Analytical methods. The extent of mineralization of phthalates was monitored by total organic carbon (TOC) analyses. After filtering samples with a 0.45 μ m filter, they were analyzed by TOC-Vcsh analyzer (Shimadzu, Japan). Soluble chemical oxygen demand (in present study indicated as COD) and characteristics of biomass, including volatile solids (VS) and total solids (TS) were analyzed periodically according to the analytical methods as presented in standard methods 26 . The biofilm was removed within two steps: initially, it was removed from carriers physically; then, the ultrasonic treatment, which is described in Chu and Wang's (2011) study 27 , was used for the complete removal of the remaining biofilm. 
In order to quantify the dry weight of the attached biofilm, 15 pieces of carriers were taken from the AnFFFBR (new carriers were replenished), and then the detached biofilm was dried at 105 °C and subsequently weighted. For phthalates' analysis, 10 mL of the effluent sample was filtered through a glass fiber filter with a 0.7 μ m pore size, and then extracted with 2 mL of n-hexan. Finally, the extracted sample was analyzed by gas chromatograph (GC) and equipped with a capillary HP-5 column and flame ionization detector (FID). Temperature program and other conditions of GC were as follows: the initial oven temperature of GC was set at 70 °C and kept for one minute, and then was raised at a rate of 10 °C/min until the final temperature of oven reached 250 °C, and then it was held for 2 minutes. The temperatures of the detector and injector were set at 260 and 250 °C, respectively. Nitrogen was employed as a makeup and carrier gas. Naphthalene was selected as the internal standard, and phthalates concentration in samples were determined by comparing them to their calibration curves which were prepared at five points. The injection volume of sample was 2 μ L. Analytical measurement of methane and other gasses in biogas were performed according to the study by Lay et al. 28 . Detecting intermediates after metabolic reactions were performed through gas chromatograph (GC) and liquid chromatography (LC) which were equipped with single and two mass spectrometer (MS) detector(s), respectively (GC-MS and LC-MS/MS). After 1 g of freeze-dried sample was ground and extracted by 10 mL of n-hexan solution, the same method of measurement of phthalates in effluent was used for quantifying phthalates' concentration in sludge. Mathematical kinetics modeling. Mathematical models and critical kinetic parameters (e.g. sludge yield coefficient and overall reaction rate) are important variables for predicting bioreactors performance and designing biological wastewater plants. First order, second order, and Stover-Kincannon, which are three common mathematical substrate utilization models, are applied in the present study to predict the AnFFFBR performance and evaluate the biodegradability of selected phthalates. When the steady-state condition is attained, if the first order model prevails, substrate removal rate can be predicted by Eq. where K 1 is the first order constant and expressed as (d −1 ); S and S 0 are substrate concentrations in effluent and influent, respectively; Q is the hydraulic loading rate (L.d −1 ); and HRT is hydraulic retention time (day). For predicting substrate removal rate in biofilm based reactors, Stover-Kincannon model was modified and linearized as Eq. (2). where U max is the maximum substrate removal rate (mg COD/L.d), K B is the saturation value constant (mg/L.d), and V is the reactor volume (L). Under steady-state conditions, if the second order model prevails, substrate utilization rate can be predicted by Eq. (3) and, subsequently, second order coefficient (K G ) which is expressed as (d −1 ) can be determined by Eq. (4). Half saturation constant (K s ) which is expressed in (mg.L −1 ) and overall reaction rate (K) which is expressed in (d −1 ) for attached growth bioreactors under steady-state conditions (ds.dt −1 = 0) can be determined by combining Eqs. 
(5) and (6) which are mass balance and Monod equations, respectively: where V is the reactor volume (L); X (A) represents attached biomass per area of fixed bed (g VS.m −2 ); A is the available surface area (m 2 ); and r su is the substrate utilization rate (mg.m −2 .d −1 ). When steady-state conditions are attained, the rate of substrate concentration change in Eq. (5) is negligible (ds.dt −1 = 0), and Eqs (5) and (6) can be combined and rearranged as Eq. (7). Substituting from Eq. (6) with r su from Eq. (5) will yield Eq. (7) as follows: (K) and (K S ) kinetic constants can be calculated from slope and intercept of linear regression of Eq. (7), respectively. Biomass yield coefficient (Y) and biomass decay rate (K d ) which are expressed in (g VS produced/g substrate utilized or dimensionless) and (d −1 ) can be determined by using biofilm mass balance equation and Monod growth kinetic, which can be written as Eqs (8) and (9). When steady-state conditions (dX.dt −1 = 0) are attained, Eq. (10) which is a linearized equation can be obtained by substituting − ⋅ ⋅ Y(r ) K A X su d ( A) from Eq. (9) with r g from Eq. (8), as follows: Subsequently, (Y) and (K d ) can be determined by linear regression of plotted line of (S 0 -S/X) versus X QX att . X att which represents attached biofilm mass is expressed as (g VSS) and calculated by multiplying (A) and (X A ). Finally, maximum specific growth rate coefficient (μ m ) can be determined by multiplying (Y) and (K) coefficients, Eq. (11) 29 : Scientific RepoRts | 7:41020 | DOI: 10.1038/srep41020 Results and Discussion AnFFFBR performance evaluation. Anaerobic fixed film fixed bed reactor performance was evaluated under different substrates including DAP, DEP, DMP, and PA, and different hydraulic retention times (HRTs). In order to evaluate removal efficiencies and kinetic behavior of mentioned substrates, effluent concentrations of phthalates, COD, and TOC were analyzed as bioreactor responses. Moreover, attached and sloughed biofilm masses were measured. The experimental results for AnFFFBR performance are shown in Tables 1 and 2. The experimental results for all four study steps (A to D) indicate that phthalates' removal increased from study runs 1 to 5, as HRT increased. Moreover, COD and TOC removal of all the substrates showed similar behavior. It could be due to higher bio-availability of biofilm mass to the substrates, and providing a higher contact time for secreted enzymes from microbial mass 3 . According to the comparison of AnFFFBRs' performance in term of phthalates, COD, and TOC removal, it can be concluded that this bioreactor has a better performance for phthalates' removal than for their mineralization. For instance, in study phase (B-1) with DMP as substrate, the DMP removal was 71.26% which is considerably higher than 46.5% of TOC removal. In addition, with increasing HRT, the difference between phthalates and their mineralization decreased (e.g. with DMP as the substrate, the difference between DMP and its TOC removal decreased from 24.76% (study phase B-1) to 12.11% (study phase B-5)). This demonstrates that the performance of AnFFFBR is significantly dependent on HRT. Higher phthalates' removal rather than their mineralization may be related to their benzene (aromatic) ring which is more bio-refractory than their side alkyl (ester) chains. It is in good agreement with the experimental observations of Spagni et al. which reported that the formed aromatic compounds are refractory to anaerobic digestion 30 . 
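The bodies of the equations cited in the kinetic modeling section above (Eqs. 1-4 and the linearizations that follow) did not survive text extraction. As a reading aid, the LaTeX fragment below sketches the conventional forms of the first-order, modified Stover-Kincannon, and Grau second-order models as they are usually written for continuous attached-growth reactors; the authors' exact notation and equation numbering may differ, so this should be read as a hedged reconstruction rather than a verbatim restatement of the original equations.

```latex
% Conventional substrate-removal models (reconstruction; notation may differ
% from the original Eqs. 1-4).
\begin{align}
  % First-order model: steady-state removal proportional to effluent concentration
  \frac{Q\,(S_0 - S)}{V} &= K_1\, S\\[4pt]
  % Modified Stover--Kincannon model, linearized form
  \frac{V}{Q\,(S_0 - S)} &= \frac{K_B}{U_{\max}}\cdot\frac{V}{Q\,S_0}
                            + \frac{1}{U_{\max}}\\[4pt]
  % Grau second-order model and a common linearized form, from which
  % K_G is obtained via the intercept S_0/(K_G X)
  -\frac{dS}{dt} &= K_G\, X\left(\frac{S}{S_0}\right)^{2}, \qquad
  \frac{S_0\,\mathrm{HRT}}{S_0 - S} = a + b\,\mathrm{HRT},
  \quad a = \frac{S_0}{K_G\, X}
\end{align}
```

In all three forms, S0 and S are the influent and effluent substrate concentrations, Q the flow rate, V the reactor volume, HRT the hydraulic retention time, and X the biomass concentration, matching the symbol definitions given in the text.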
In addition, this can be supported by other studies which stated that aromatic compounds are refractory [31][32][33] . Among the four selected phthalates, the maximum removal efficiency was observed with DMP as substrate, which was 92.5, 84.41, and 80.39% for DMP, COD, and TOC removal (study phase B-5), respectively. On the other hand, the minimum removal efficiency was observed with DAP as substrate, which was 59.43, 51.86, and 39.74% for DAP, COD, and TOC removal (study phase A-1), respectively. One-way ANOVA analyses for COD, TOC, and phthalate removal efficiencies originated from different phthalates showed that removal efficiencies of the mentioned parameters for DEP, PA, and DMP were statistically different from those of DAP in all study phases (P-value < 0.05). It should be highlighted that, although one-way ANOVA indicates that some study phases for DMP and PA removal were statistically different (e.g. the COD and phthalate removal in HRTs of 24 and 30 h for DMP were statistically higher compared with PA), there were other study phases in which the performances of bioreactor were close. However, further comparison of these two phthalates should be assessed with their kinetic coefficients. According to the obtained results, the AnFFFBR with DMP and PA had better phthalates removal performance than with DEP and DAP. By comparing physico-chemical properties of selected phthalates as presented in previous studies 17,34 a correlation was observed between alkyl chains' length, octanol-water partition coefficient (K OW ), and molecular size of these phthalates and their removal. It can be concluded that phthalates with shorter chain lengths, lower molecular sizes, and lower octanol-water partition coefficients can be removed in a higher rate. Although the PA has the lowest molecular weight and K OW and the shortest alkyl chains, the DMP removal was performed in a slightly higher rate (except for study phase B-1 compare to D-1) which may correlate with the higher solubility of DMP (4200 mg/L) compared with PA (625 mg/L) 17,34 . DMP solubility (which is the highest solubility among selected phthalates) can lead to higher diffusion of this substrate and consequently higher bio-availability for biofilm mass to utilize it. Despite the fact that the solubility of these phthalates can influence bioreactor performance, comparing biodegradation of DEP which has a higher water solubility (1000 mg/L) and PA (which has a higher removal rate) can demonstrate that alkyl chains' length has an independent effect on phthalates' removal. It should be noted that for most of the study phases, one-way ANOVA results for phthalate and TOC removal efficiencies originated from DMP and PA were statistically different when compared with DEP (P-value < 0.05). The solids' retention time (SRT) which represents the average presence time of the microbial mass in the bioreactor 19 , increased as HRT increased (e.g. from 18.7 to 30.53 (day) with PA as substrate). This increase can be another reason for improving phthalates' removal in higher HRTs. Moreover, the obtained SRTs for this study were considerably more than the SRTs needed for wastewater treatment in other conventional treatment plants such as activated sludge, due to the type of biofilm growth which is attached in this study 19 . It has been reported that typical SRT values in the range of 4 to 10 days can result in carbon oxidation in full scale aerobic wastewater treatment plants, while its values should be long enough for removing refractory compounds 35 . 
This can demonstrates the AnFFFBR has significant ability to tolerate and remove all the selected phthalates. In addition, changes in attached biofilm mass (VS, mg) in all phases of the study were lower than the organic loading rates (e.g. with DMP as substrate, the attached mass (mg VS) only decreased nearly 1.78 times, while the organic loading rate (OLR) reduced three times from study phase B-1 to B-5). The rate of adsorbed phthalates' concentration to the sloughed biofilm (produced sludge) is also another important factor which must be considered, especially whenever the sludge is applied for soil conditioning. As PAEs have high K OW , they tend to accumulate in soils or other organic constitutes. According to the experimental results, all phthalates and especially DAP enriched in sloughed biofilm. The maximum concentration of phthalates in sloughed biofilm belonged to DAP as 11.9 (mg DAP/g TSS) in study phase A-1. Higher K OW of DAP can result in higher hydrophobicity of this phthalates which consequently leads to more biosorption of this phthalate to microbial mass 17 . The maximum adsorbed phthalates was observed for DAP as 11.9 mg DAP/mg sloughed biofilm (study phase A-1) and, by contrast, the minimum concentration was observed for PA as 1.8 mg PA/mg sloughed biofilm. It should be noted that all adsorbed phthalates' concentrations were significantly higher than the amounts reported in previous studies (e.g. the highest PAEs concentration in primary and secondary sludge was reported as 1250 mg PAEs/Kg sludge 36 ). It indicates that all produced sludge of AnFFFBR must be digested before disposal or application to the land. The main gasses detected in collected biogas for all selected phthalates were similar and included methane, carbon dioxide, hydrogen, and hydrogen sulfide. The biogas analysis results represent that methane production rate and its concentration (or its percentage) in biogas were very sensitive to HRT and OLR, and in lower HRTs and higher OLRs methane production rates were considerably reduced for selected phthalates. Reducing methane yields in higher OLRs (lower HRTs) might be related to the toxic effects of phthalates on methanogen bacteria, because similar results were recorded for all of the selected phthalates. Moreover, according to the obtained results, in lower HRTs and consequently higher hydraulic loading rates, the SRTs decreased significantly, which can produce unfavorable conditions for growth and activity of methanogen bacteria. It is well known that methanogens are more sensitive to environmental conditions and have slower growth rates 19,[37][38][39] . In addition, it has been reported that increasing OLR of some specific wastewaters can decrease the specific activity of methanogen bacteria 40,41 . Similar results have been reported by Chaisri et al. for anaerobic filter, and higher organic loading rates decreased methane production rates 42 . The maximum methane production rate for DAP, PA, DEP, and DMP were observed as 0.35, 0.4, 0.43, 0.46 L CH 4 /g COD rem with HRT of 36 h, respectively. The obtained methane yields for study phases (B-5), (C-5), and (D-5) were above the theoretical methane production which is usually expected to be around 0.35 L CH 4 /g COD 43 . Considerable obtained methane yields can be as the result of high SRTs observed for this bioreactor, which can improve the presence of slow-growing bacteria such as methanogens 44 . 
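The theoretical benchmark of about 0.35 L CH4 per g COD quoted above follows directly from the COD equivalence of methane (64 g COD per mole of CH4) and the molar volume of an ideal gas; the short sketch below reproduces that figure and shows how it shifts at the 25 °C operating temperature of the reactor. The observed yield used in the comparison (0.46 L CH4/g COD for DMP) is taken from the text, while interpreting a ratio above one as a contribution from lysed biomass is the hypothesis discussed above, not an additional result.

```python
# Theoretical methane yield per gram of COD removed, from the COD balance:
# oxidizing 1 mol CH4 (CH4 + 2 O2 -> CO2 + 2 H2O) consumes 2 mol O2 = 64 g COD.
MOLAR_VOLUME_0C = 22.414   # L/mol of ideal gas at 0 degC, 1 atm
COD_PER_MOL_CH4 = 64.0     # g COD per mol CH4

def theoretical_ch4_yield(temp_c=0.0):
    """Theoretical CH4 yield (L CH4 per g COD removed) at a given temperature."""
    molar_volume = MOLAR_VOLUME_0C * (273.15 + temp_c) / 273.15
    return molar_volume / COD_PER_MOL_CH4

print(f"at 0 degC : {theoretical_ch4_yield(0.0):.3f} L CH4/g COD")   # ~0.35
print(f"at 25 degC: {theoretical_ch4_yield(25.0):.3f} L CH4/g COD")  # ~0.38

# Comparing the reported DMP yield (0.46 L CH4/g COD at HRT = 36 h) with theory.
observed = 0.46
print(f"observed/theoretical at 25 degC: {observed / theoretical_ch4_yield(25.0):.2f}")
```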
It has been known that biofilm-based processes are useful, especially when slow-growing bacteria have to be kept in the bioreactor 45 . Moreover, the phenomenon of bacterial cell death and subsequently lysis which can be intensified in higher SRTs, especially in inner layer of biofilm, may result in releasing more substrates as C 5 H 7 NO 2 19 which can be used by other active bacteria and consequently produce more biogas. These hypotheses can be confirmed, since study phases (B-5), (C-5), and (D-5) which were operated in maximum HRTs, had the lowest volatile suspended solids (VSS e ) in their effluents as 36, 48, and 46 mg.d −1 , respectively. There are other studies which had reported that co-digestion of wastewater with volatile solids (VS) can significantly increase methane production rates. These low VSS e can demonstrate that more bacterial remained tissues were converted to final and mineralized products such as methane, carbon dioxide, and other inert minerals. As an example, Tezel et al. reported that co-digestion of four different samples of wastewater with initial COD concentrations of 0.6 to 1.53 g COD/L and VS of 0.1 to 0.71 g VS/L can lead to methane production rates up to 0.98 (L CH 4 /g VS added) 46 . Furthermore, there are other studies that reported methane yields between 0.41 and 0.66 L CH 4 /g COD for anaerobic wastewater treatment [47][48][49] . Additionally, in most study phases, methane production rates with DAP as substrate were considerably lower than in comparison with other substrates. As a result, according to the potential of biogas production, DMP can be considered as a suitable substrate due to its higher methane production rate. The proposed biological pathway of phthalates is presented in Fig. 1. A total of four primary intermediates were identified during DAP biodegradation including mono-allyl phthalate, dimethyl phthalate, mono-methyl phthalate and PA. According to the results, DAP was initially metabolized to mono-allyl phthalate and then the bioreaction moves forward to produce phthalic acid. This could be caused by enzymatic ester hydrolysis which commonly known as a de-esterification (Ahuactzin-Pérez et al., 2016; Liang et al. 50 , Detecting trace amount of DMP and mono-methyl phthalate can be as a result of second biodegradation route which is termed as de-methylation (it should be noted that mono-allyl mono-methyl phthalate was not detected) 50 . Similar observation was reported by Amir et al. where DMP and dibutyl phthalate (DBP) were observed as by-products of diethyl hexyl phthalate (DEHP) biodegradation during sludge composting 51 . However, many studies reported that the de-esterification is the main route in phthalates biodegradation 50 . Similar pathway observed for DEP biodegradation and mono-ethyl phthalate, mono-ethyl mono-methyl phthalate, DMP, mono-methyl phthalate, and PA were detected as its primary metabolites. In addition, mono-ethyl phthalate and PA were the main by-products and this can indicate that de-esterification is the main route of DEP biodegradation. Furthermore, DMP metabolite analyzing indicates that removing its side chains (de-esterification) is the only route of biodegradation as the mono-methyl phthalate and phthalic acid were detected as its only primary by-products. 
Analysis of the by-products of DAP, DEP, and DMP biodegradation demonstrates that these compounds follow similar pathways: detachment of the side alkyl chains (de-esterification) is the main route of their biodegradation, and phthalic acid was detected as a central metabolite. In addition, phthalic acid biodegradation led to the production of 3,4-dihydroxybenzoic acid (protocatechuic acid), catechol, benzoic acid, 4-hydroxyphthalic acid (4-hydroxyphthalate), 4,5-dihydroxyphthalic acid (4,5-dihydroxyphthalate), 3,4-dihydroxyphthalic acid (3,4-dihydroxyphthalate), etc. Some of these detected metabolites, including benzoic acid, 3,4-dihydroxyphthalate, catechol, and 4,5-dihydroxyphthalate, were similarly reported by other studies on the biological removal of phthalates 50,52 . The biodegradation continued with benzene ring cleavage of the metabolites produced from PA decomposition, forming other by-products such as 2-hydroxymuconic semialdehyde. The pathway then continued by producing the common volatile fatty acids (VFAs) of anaerobic wastewater treatment, including acetic acid, propionic acid, butyric acid, etc. These detected VFAs are common by-products of the anaerobic biodegradation of organic compounds 19 . Eventually, acetogens consume these VFAs and produce acetate, hydrogen, carbon dioxide, etc., which are the precursors of methane production 19,53 . Finally, the biodegradation pathway ended with the production of carbon dioxide, methane, and hydrogen, which were detected in the biogas as previously stated. These final products indicate that biological removal of phthalates with the AnFFFBR can be considered a reliable, safe, and eco-friendly treatment. Furthermore, it should be mentioned that other non-phthalate metabolites such as VFAs and methane can be produced at each step of the biodegradation pathway and not solely at its end. Kinetic evaluation of phthalates removal. The first order model for the selected phthalates is shown in Fig. 2. According to the obtained first order coefficients (K 1 ), the COD removal for the selected phthalates increased stepwise in the following order: DAP, DEP, PA, and DMP, which confirmed the obtained results; nevertheless, this model still cannot be applied due to its relatively low degree of precision (nearly R 2 < 0.8). In order to calculate the maximum substrate removal rate (U max ) and the saturation value constant (K B ), Eq. (2) was plotted in Fig. 3. The maximum U max was observed for DMP as 1.162 mg COD/L.d, which demonstrates that DMP has the highest biodegradability. Moreover, this model has high coefficients of determination for all selected phthalates (0.96 ≤ R 2 ≤ 0.98), which means its use is preferred to the first order model for predicting phthalates' removal in the AnFFFBR; however, the U max value for PA (0.8 mg COD/L.d) was lower than that of the other phthalates, which contradicts the obtained results. This reflects the disadvantage of the Stover-Kincannon model when it is applied to biofilm reactors. It should be noted that this model does not involve the effect of microbial mass and was developed solely on the basis of organic loading rate 3 . In this regard, it is hard to design biofilm-based reactors without considering the impact of available surface area and microbial mass, and this model should only be used for predicting bioreactors' substrate removal rate. Figure 4 represents the graph plotted for calculating the second order coefficient (K G ).
Considering the high coefficients of determination obtained for all phthalates' removal (R 2 > 0.96), this model can be used for predicting and describing AnFFFBR performance with a high degree of precision. Moreover, the obtained K G , which represents the substrate utilized per unit of microbial mass, was in good accordance with the obtained results. Based on the K G values shown in Table 3, DMP had the highest biodegradability and utilization in the AnFFFBR. The biodegradability of the phthalates decreased in the following order: DMP, PA, DEP, and DAP, with K G values of 3.52, 3.11, 3.00, and 2.86 (d −1 ), respectively, in optimum conditions (study run 5 for all phthalates). The obtained K G values of all substrates increased significantly as HRT increased. This may be the result of higher bioavailability of the substrate to the microorganisms, which consequently increases substrate biodegradation. In order to calculate the half saturation constant (K S ) and the overall reaction rate (K), Eq. (7) was plotted in Fig. 5. The obtained overall reaction rate values demonstrate that DMP (1.194 d −1 ) and DAP (1.027 d −1 ) had the maximum and minimum biodegradation rates, respectively. Moreover, the K S values obtained for DMP (28.57 mg/L) and PA (27.7 mg/L) were significantly higher than that of DAP (15.29 mg/L); however, these values cannot be directly evaluated, as the maximum specific growth rates (μ m ) and substrate utilization rates (r su ) of the phthalates are not equal. If these parameters (μ m and r su ) were equal, then the microbial community growing on the substrate with the lower K S would have more affinity for that substrate in low-substrate conditions. In fact, microbial competition for substrate in biological processes becomes a relevant phenomenon, and the organism with the lowest (K S ) has the highest affinity for the substrate and will outcompete other organisms present in that culture 54 . Figure 6 represents the yield coefficient (Y) and decay rate (K d ) of biomass with different substrates. Maximum and minimum yields were observed for DMP and DAP as 0.161 and 0.139, respectively. The yield coefficients with DEP (0.15) and PA (0.152) as substrate were not significantly different. The determined yield coefficients were much lower than the aerobic yields reported for conventional wastewater treatment, which are typically about 0.6 19 . The yield coefficient for an aerated submerged fixed-film reactor (ASFFR) with glucose as substrate was reported as 0.63, which is more than four times the yield coefficients calculated in this study. The observed yield coefficients were more than two times those of other conventional anaerobic processes, which are about 0.06 55 . However, other studies reported that yield coefficients for anaerobic conditions can reach 0.3 and higher, which is significantly higher than what was observed in the present study [56][57][58] . This is an important consideration, because excess sludge treatment and disposal can cause many economic, environmental, and regulatory challenges for wastewater treatment plants 59 . Furthermore, the calculated yield coefficients of the present study are close to those of novel technologies such as the aerated submerged membrane bioreactor, with a yield coefficient of about 0.15 60 . According to these comparisons, the AnFFFBR can be introduced as a promising bioreactor, especially with regard to sludge management.
Moreover, decay rates were not considerably different among the phthalates, except for DMP. The higher decay rate for DMP may be related to its higher substrate utilization rate, which leads to a higher growth rate; given the constant surface area available for biofilm growth, more microbial mass may be located in the inner biofilm layer, where it cannot access the substrate and therefore decays. The maximum specific growth rates (μ m ) estimated for the selected phthalates are shown in Table 4. The (μ m ) values were 0.193, 0.170, 0.156, and 0.143 d −1 for DMP, PA, DEP, and DAP, respectively. These coefficients confirm the recorded COD results. Conclusions Although all selected phthalates showed acceptable biodegradability, among the four, DMP can be selected as the best substrate due to its highest biodegradability, mineralization, and methane production. Moreover, DMP had the highest values of overall reaction rate and maximum specific growth rate, which confirms that this substrate is more biodegradable than the others. According to the results of the mathematical modeling, the Grau (second-order) model can be selected as the best model for designing and predicting the AnFFFBR's performance due to its strong coefficients of determination. Finally, the selected phthalates had similar biodegradation pathways, and the main route was the detachment of the side alkyl chains (de-esterification).
Staphylococcus aureus Putative Vaccines Based on the Virulence Factors: A Mini-Review Since the 1960s, the frequency of methicillin-resistant Staphylococcus aureus as a recurrent cause of nosocomial infections has increased. Since multidrug-resistant Staphylococcus has overcome antimicrobial treatment, the development of putative vaccines based on virulence factors could be a great help in controlling the infections caused by bacteria and are actively being pursued in healthcare settings. This mini-review provides an overview of the recent progress in vaccine development, immunogenicity, and therapeutic features of some S. aureus macromolecules as putative vaccine candidates and their implications against human S. aureus-related infections. Based on the reviewed experiments, multivalent vaccines could prevent the promotion of the diseases caused by this bacterium and enhance the prevention chance of S. aureus infections. INTRODUCTION Staphylococcus aureus is a widespread commensal and pathogen bacterium. S. aureus bacteria induce staph food poisoning that leads to gastrointestinal illness through eating foods contaminated with the toxins produced. About 25% of animals and people have staph in their nose and on their skin (Le Loir et al., 2003). It is also one of the most isolated bacteria among both nosocomial and community-acquired infections. It causes many types of human infections and syndromes such as mild skin and soft tissue infections, bacteremia, endocarditis, pneumonia, metastatic infections, sepsis, and toxic shock syndrome (van Belkum, 2006). A hospital environment and medical devices contaminated with S. aureus can affect the health of patients. Over the past decades, staphylococcus nosocomial infections have significantly increased (Kuklin et al., 2006;Hogea et al., 2014). Since the 1960s when the first methicillin-resistant S. aureus (MRSA) was identified, a major challenge has begun (Adhikari et al., 2012). The emergence of antibiotic-resistant strains of staphylococci, mainly MRSA, emphasizes the serious control of S. aureus-related infections (O'Neill et al., 2008)-for example, the outbreak of S. aureus bloodstream infections in the United States in 2017 induced nearly 20,000 deaths (Kourtis et al., 2019). However, there is no current vaccine for S. aureus infection. Several S. aureus virulence factors have been evaluated as vaccine candidates. Infections caused by MRSA in hospital wards have decreased due to increased health assessments and the presentation of effective vaccines. Staphylococcus spp. conserved surface components with a high rate of expression in the bloodstream or biofilm-forming process factors stand as suitable staphylococcal candidate vaccines to decrease the staphylococcal disorders (Van Mellaert et al., 2012;Hogea et al., 2014). Thus, it is essential to know the relevant factors involved in biofilm formation from a molecular pathogenesis perspective and to discover the physiological status of these virulence factors within the body in order to realize whether they have the potency to develop an aggressive behavior. Vaccine Development Based on the Targets Many investigations have put forward a large number of targets for vaccine development against S. aureus, which increase the number of putative targets. In the classical approach, different targets with certain functions have been studied and evaluated as subunit vaccine. 
New target candidates have also been suggested by reverse vaccinology and bioinformatics (Zhang et al., 2003;Bowden et al., 2005;Gill et al., 2005). In order to cover the genetic diversity of a pathogen in vaccine development strategies, its pan-genome should be analyzed, and its molecular epidemiology should also be examined (Mora and Telford, 2010). The Search for Vaccine Targets Poly(glutamic acid) (PGA) stands for a good vaccine candidate against the mentioned bacterium, owing to its protection effects against antimicrobial peptides during biofilm-related infections and neutrophil phagocytosis. The result of an experiment indicated that arisen antibodies to conjugated PGA are able to protect three models of animals, including guinea pig, mouse, and rabbit, against anthrax (Joyce et al., 2006). Phenol-soluble modulins (PSMs) are considered as another promising group as vaccine target. Recently, a study showed that PSMβ peptides had an inhibitory effect on bacterial dissemination from implants (Rennermalm et al., 2004;Wang et al., 2011). Unlike most mentioned vaccine candidates, PSMβ interferes the dissemination of biofilm-associated infection via preventing detachment mechanisms. Capsular Polysaccharide The function of conjugated microencapsulated S. aureus type 8 (the isolate came from bovine mastitis milk) to Pseudomonas aeruginosa exotoxin A (ETA) was assessed in a mouse model. The antibody response was triggered 3 days following the immunization and lasted for 13 days of the observation period after the second injection in some mice. The antibody response and the survival rate were higher in the group of mice immunized with the CP8-ETA conjugates in comparison with those receiving complete Freund's adjuvant or phosphate-buffered saline. Based on the result of this experiment, the CP8-ETA vaccine is able to protect mice against S. aureus bacteremia (Han et al., 2000). Iron-Regulated Surface Determinant B The S. aureus iron-regulated surface determinant B (IsdB), a prophylactic vaccine against S. aureus infection, as an ironsequestering protein exists in many S. aureus clinical isolates and methicillin-resistant and methicillin-sensitive isolates and is expressed on the surface of all tested isolates. As the mice were immunized with IsdB formulated with amorphous aluminum hydroxyphosphate sulfate, high immunogenicity of IsdB in rhesus macaques was observed. Furthermore, a fivefold increase in antibody titers was seen after a single immunization, which indicates IsdB potency as a vaccine against S. aureus disease in humans (Jones et al., 2001;Kuklin et al., 2006). A randomized study on the preoperative receipt of Merck V710 S. aureus vaccine containing non-adjuvanted IsdB demonstrated that all V710 recipients and only about 8% of the placebo recipients died of postoperative S. aureus infection following a major cardiothoracic surgery. These results may raise the concern of researchers about the immunization itself, which might affect either the safety or the efficacy of the development of staphylococcal vaccines (McNeely et al., 2014;Daly et al., 2017). In another cohort study, in spite of modern perioperative management, postoperative S. aureus infection occurred in 1% of adult patients. The mortality rates were also 3% for methicillinresistant S. aureus infections and 13% for MRSA infections (Allen et al., 2014). Virus-Like Particle-Based Vaccines The coordination of the expression of the required virulence factors in the invasive infection of S. 
aureus happens using secreted cyclic auto-inducing peptides (AIPs) and the accessory gene regulator (agr) operon. AIPs are small in size and require a thiolactone bond. In order to solve this issue, the viruslike particles were utilized as a vaccine platform (PP7) for a conformationally restricted presentation of a modified AIP1 amino acid sequence (AIP1S). AIP1-specific antibodies inhibited agr activation in vivo; moreover, it reduced pathogenesis and increased bacterial clearance in murine skin and a soft tissue infection model carrying a highly virulent agr type I S. aureus isolate, which all indicated vaccine efficacy and that it might have a great impact on antibiotic resistance (Daly et al., 2017). AaP, accumulation-associated protein; TA, teichoic acid; mAB, monoclonal antibody. Staphylococcus aureus Alpha-Hemolysin Based on the results of previous studies, a recombinant vaccine for S. aureus alpha-hemolysin should have a heptameric structure for its crystal. HIa, a pore-forming toxin, is expressed by the majority of S. aureus strains. HIa was examined for vaccination with AT-62aa along with a glucopyranosyl lipid adjuvant-stable emulsion. Then, the results indicated that sepsis protection in an experimental model of S. aureus infection was done by utilizing Newman and the pandemic strain USA300 (LAC). This model demonstrated the AT-62aa is a proper vaccine candidate. The identification of AT-62aa protective epitopes may also result in novel immunotherapy for S. aureus infection (Adhikari et al., 2012). Staphylococcus aureus LukS-PV-Attenuated Subunit Vaccine LukS-mut9 is an attenuated mutant of LukS-PV with a high immunogenic response. This mutant has shown significant protection in mouse sepsis model. Recent findings revealed that the protection of the Panton-Valentine leukocidin (PVL) vaccine in mice model is related tocross-protective responses against other homologous toxins, owing to the generated polyclonal antibodies by LukS-mut9, which can neutralize other canonical and non-canonical leukotoxin pairs. There has been a correlation between the arisen antibodies, PVL subunits, and sepsis in patients with high antibody titer against the mentioned subunits (Adhikari et al., 2012). Four-Component Staphylococcus aureus Vaccine In a study conducted based on a murine S. aureus infection model, antigen-specific antibodies were accumulated in the pouch, and the infection was mitigated following immunization with 4CStaph and bacterial inoculation in an air pouch generated on the back of the animal. The upregulation of FcR and the presence of antigen-specific antibodies induced by immunization with 4CStaph could increase bacterial opsonophagocytosis. Alternative protection mechanisms may be activated by a proper vaccine, balancing neutropenia, which is a condition often happening to S. aureus-infected patients (Torre et al., 2015). The Mixture of PBP2a and Autolysin as a Candidate Vaccine Against Methicillin-Resistant S. aureus Based on a study, the mortality rate was reduced in mice, and they were protected against lethal MRSA challenge as well as single proteins following an active vaccination with a mixture of r-PBP2a/r-autolysin and a conjugated form of the vaccine (Haghighat et al., 2017). Some of the selective putative vaccine candidates and a summary of the vaccine candidate development in S. aureus are listed in Table 1 and Figure 1. LIMITATION Several vaccine candidates which are of recent progress in vaccine development are only presented in this study. 
Therefore, the general functions of the vaccine candidate molecules, particularly PSM, ETA, IsdB, alpha-hemolysin, LukS-PV, PBP2a, and autolysin, were not discussed in further detail. CONCLUSION Vaccine development against staphylococcal infections is still in its infancy. Irrefutably, more studies on staphylococcal virulence factors and immune evasion are required to reach a complete understanding of the virulence mechanisms. Since numerous variable infection-related factors exist and are expressed in staphylococcal species, multivalent vaccines consisting of several antigens related to different infection stages are needed. Few options remain for dealing with S. aureus infections because of their high antibiotic resistance and the increasing incidence of infections caused by this microorganism. Fortunately, sufficient research has been conducted on the effects of various vaccine candidates directed against S. aureus virulence factors, and the capability for biofilm production can be regarded as one of the most important factors in bacterial colonization. If a suitable vaccine candidate can both (1) inhibit biofilm formation and (2) neutralize the effects of bacterial virulence factors, then the prevention and elimination of these infections becomes conceivable. It is expected that designing a multivalent vaccine with the above-mentioned content will raise the effectiveness of antibodies and lead to the eradication of S. aureus-related infections. AUTHOR CONTRIBUTIONS BM contributed to conceptualization, data collection, data curation, and writing of the manuscript. RB, HZ, and MD contributed to data collection. AS contributed to data collection and writing of the manuscript. All authors read and approved the manuscript.
In Vivo Confocal Microscopy Evaluation of Ocular Surface with Graft-Versus-Host Disease-Related Dry Eye Disease Dry eye disease (DED) is often elicited by graft-versus-host disease (GVHD), an extensive complication of hematopoietic stem cell transplantation (HSCT). To unravel the mechanism of this type of DED, in vivo confocal microscopy (IVCM) was used to investigate alterations in the state of the sub-basal nerves, dendritic cells (DCs) and globular immune cells (GICs) in the central cornea and limbal epithelia. In this study, we examined 12 HSCT recipients with GVHD-caused DED and 10 HSCT recipients without GVHD-associated DED and evaluated the clinical parameters in the 2 groups. Analysis of the central cornea and limbal epithelia using IVCM was conducted to investigate the density of the corneal sub-basal nerves, DCs and GICs as well as the tortuosity and branching of the sub-basal nerves. As suggested by our data, the clinical variables in the GVHD group were significantly different from those in the non-GVHD group. Additionally, GVHD-triggered DED conceivably increased the density of DCs and GICs in the central cornea and the density of DCs in limbal epithelia and altered the morphology of the sub-basal nerves. These phenomena are presumably correlated with the degree of inflammation. Thus, our findings may be translated into non-invasive diagnostic methods that indicate the severity of inflammation on the ocular surface in HSCT recipients. Hematopoietic stem cell transplantation (HSCT) is widely used for the treatment of hematological malignancies and several benign hematopoietic diseases. Technology has progressed considerably during the last decades, and consequently, the success rate of HSCT itself has substantially improved 1, 2 . However, medical settings are still required to combat disabling complications resulting from HSCT. One of the most notorious disorders is graft-versus-host disease (GVHD), which can affect multiple organs. GVHD is one of the leading causes of morbidity after HSCT and occurs in 10-90% of HSCT recipients 3,4 . The eye is one of the most GVHD-vulnerable organs, and 40-60% of patients who undergo HSCT develop ocular GVHD 3 . In many cases, patients with ocular GVHD suffer from ocular disorders, visual deterioration, lacrimal gland dysfunction, meibomian gland dysfunction, corneal epitheliopathy, conjunctival scarring, and hyperemia. These conditions induced by ocular GVHD have a large detrimental impact on patients' quality of life 5,6 . Although several effective medications have emerged 7,8 , the exact pathogenesis of ocular GVHD remains to be elucidated. Previous studies suggest that donor-derived immunocompetent cells attack host tissues and presumably play a contributory role in the development of GVHD 9,10 . Because there are limited data evaluating the ocular surface in GVHD patients, an investigation into GVHD-elicited changes of the ocular surface at the cellular level could provide some insights into the developmental process that leads to ocular GVHD. In vivo confocal microscopy (IVCM) provides high-resolution images in a non-invasive manner and enables an examination of the ocular surface at the cellular level. In many previous studies, ICVM has been used to assess cellular and structural changes of the ocular surface in patients with various diseases, such as herpes simplex virus (HSV) keratitis, corneal graft rejection, rosacea and dry eye disease (DED), especially chronic GVHD-related dry eye disease [11][12][13][14] . 
GVHD-induced changes in the state and/or behavior of corneal sub-basal nerves, corneal epithelial dendritic cells (DCs), and conjunctival epithelial immune cells have been reported 14 . In this study, the status of the ocular surface in HSCT recipients with GVHD-induced DED was compared with those without DED. Considering that GVHD patients undergo total body irradiation (TBI), usually receive immunosuppressive therapies and have underlying hematopoietic diseases, alterations in the condition of the ocular surface may be induced by these factors 3,15 . This clinical study comprehensively focused on these matters. Consequently, the results may potentially elucidate the link between GVHD-triggered DED and the deterioration in the state of the ocular surface in HSCT recipients. Results Twenty-two eyes from 22 HSCT recipients (10 males and 12 females) with GVHD-related DED (n = 12) and without GVHD-related DED (n = 10) were examined. The patients' characteristics in both groups are shown in Table 1. With regard to the types of HSCT, the subjects in the GVHD group underwent bone marrow transplantation (BMT, n = 8) or peripheral stem cell transplantation (PBSCT, n = 4), and those in the non-GVHD group received BMT (n = 5), PBSCT (n = 1) or umbilical cord blood transplantation (CBT, n = 4). Primary hematological diseases in the GVHD group were acute myeloid leukemia (AML, n = 4), myelodysplastic syndrome (MDS, n = 4), acute lymphocytic leukemia (ALL, n = 2), chronic myeloid leukemia (CML, n = 1) and peripheral T-cell lymphoma (PTCL, n = 1), and those in the non-GVHD group were AML (n = 3), Hodgkin lymphoma (HL, n = 1), CML (n = 1), ALL (n = 2), chronic myelomonocytic leukemia (CMML, n = 1), aplastic anemia (AA, n = 1) and non-Hodgkin lymphoma (NHL, n = 1). There was no significant difference in age range (p = 0.473), gender diversity (p = 1.000), time-period after HSCT (p = 0.660), donor type (p = 1.000), systemic immunosuppressive treatment (SIT, p = 0.099) or ratio of TBI (p = 0.096) between the GVHD and non-GVHD groups. The topical medications used by the patients are shown in Table 2. Four patients in the control group, whose ocular symptoms and signs were minimal, were medicated with sodium hyaluronate. However, these signs and symptoms did not arise from DED; they were induced by other factors such as tiny mechanical damage. Sodium hyaluronate was prescribed to protect the mucosa of the ocular surface and completely relieved the symptoms in these patients. Clinical variables and IVCM parameters are depicted in Table 3. The clinical variables, namely, ocular surface disease index (OSDI), meibomian gland dysfunction score (MGDs), cornea fluorescein staining (CFS), conjunctival lissamine green staining (CLGS) and hyperemia, in the GVHD group were significantly higher than those in the non-GVHD group. Both the Japanese dry eye score (JDEs) and the international chronic ocular GVHD scores (ICOGs) in the GVHD group were significantly higher than those in the non-GVHD group. These latter two parameters were significantly associated with all IVCM parameters except the density of the sub-basal nerves. In contrast, the opposite was seen with Schirmer's test and tear film break-up time (BUT). As for the results of the IVCM images, DCs with typical branching dendrites in the basal layer of the epithelia in the central cornea were observed in chronic GVHD (cGVHD)-related DED patients (Fig. 1a). 
In addition, DCs with small cell bodies and short dendrites in the basal membrane in the central cornea were identified (Fig. 1b). DCs with highly reflective cell bodies and branching dendrites in the basal layer in the limbus were also apparent (Fig. 1c). Moreover, globular immune cells (GICs) co-existed with DCs in the basal membrane layer in the central cornea (Fig. 1d). The GVHD group displayed a significantly higher level of tortuosity (Fig. 2a,b) and branching of the sub-basal nerves (Fig. 2a,b) compared with the non-GVHD group (Fig. 2c,d). The density of GICs in the central cornea ( Fig. 3e,f), the density of DCs in both the central cornea ( Fig. 3a,b) and the limbus (Fig. 3i,j) were significantly higher in the GVHD group compared with the non-GVHD group (Fig. 3c,d,g,h,k,l). Significant correlations were mainly noticed between dry eye parameters and IVCM variables, including the density of DCs and GICs in the central cornea, the density of DCs in the limbal epithelia, and tortuosity and branching of the sub-basal nerve. Detailed correlations between IVCM variables and dry eye parameters are shown in Table 4. These results suggest that more severe infiltration of DCs and GICs on the ocular surface occurs in GVHD patients compared with non-GVHD patients. Furthermore, significant alterations in corneal innervation occur in GVHD-affected eyes. In addition, these IVCM parameters were strongly associated with a variety of dry eye parameters. Table 3. Clinical variables and IVCM parameters. *p < 0.05; **p < 0.01; ***p < 0.001. OSDI: ocular surface disease index; BUT: tear film break-up time, CFS: cornea fluorescein staining; CLGS: conjunctival lissamine green staining; MGDs: meibomian gland dysfunction score; JDEs: Japanese dry eye score; ICOGs: International chronic ocular GVHD score; DCs: dendritic cells; GICs: globular immune cells. Discussion Corneal innervation is one of the most important aspects indicating the state of the ocular surface. As evidenced by previous studies using IVCM, various diseases can alter the density, tortuosity and branching of corneal nerves. As indicated by our study, the density of sub-basal nerves in both GVHD and non-GVHD groups was lower than that in normal eyes (21.6-25.9 mm/mm 2 ) 16-20 . Kheirkhah et al. 13 reported that the density of sub-basal nerves in patients with GVHD-caused DED was 16.3 ± 6.1 mm/mm 2 , a value similar to ours and that supports the accuracy of our data. Our literature review revealed that the density of the sub-basal corneal nerve is thought to be positively correlated with the corneal sensitivity 21,22 . Indeed, our earlier work also demonstrated (1) a decrease in corneal sensitivity in HSCT recipients with and without ocular GVHD-induced DED and (2) that there was no significant difference in corneal sensitivity between the two groups 23 . Our new data indicating the decreased density of sub-basal corneal nerves in both GVHD and non-GVHD groups is therefore consistent with that presented in our previous study. Based on these observations, the conditioning regimen that HSCT recipients undergo may well be one of the factors that decreases the sub-basal nerve density. Other research groups have reported that the nerve density in patients with and without Sjögren's syndrome (SS) is not only decreased due to DED 22,24,25 but is also affected by other autoimmune diseases such as rheumatic arthritis, which is not associated with SS 26 . 
However, we found no significant difference in sub-basal nerve density between the GVHD and non-GVHD groups. Our results also indicated that the density of sub-basal nerves could also be altered by the conditioning regimen HSCT patients receive, and not only by DED-related factors and GVHD-elicited aberrant immune responses. It is conceivable that the changes in sub-basal nerve density caused by GVHD-related DED are overshadowed by other co-existing conditions. However, several other studies found no difference in sub-basal nerve function between DED patients and control subjects 27,28 . These contradictory claims about the density of sub-basal nerves are presumed to arise from the difference in the severity and/or stage of DED. Hence, this discrepancy may partly explain why the density of sub-basal nerves in the GVHD group was not significantly different from that in the non-GVHD groups. We also found that the tortuosity of sub-basal nerves in the GVHD group was considerably higher compared with that in the non-GVHD group. Steger et al. 29 also reported that the tortuosity of sub-basal nerves in patients with cGVHD-related severe DED was vastly greater than that in 6 patients without these conditions. Our result is consistent with those reported by other groups that have undertaken DED studies 25,[30][31][32][33] . In addition, earlier work suggested that the tortuosity of sub-basal nerves was increased in autoimmune diseases such as Grave's orbitopathy 34 and rheumatic arthritis 26 . In addition to the tortuosity, we investigated the branching of sub-basal nerves. Interestingly, our observations suggested that the sub-basal nerves in the GVHD group were branched at a greater level compared with those in the non-GVHD group, while Steger and colleagues 29 reported that cGVHD-related severe DED decreased the number of branched nerves. There are inconsistencies across several published reports regarding DED-induced alterations in nerve branching. Zhang and colleagues 30 reported that basal nerves were substantially more likely to branch in patients with Sjögren's syndrome-associated aqueous tear deficiency (SS-ATD) compared with their counterparts without these conditions. In contrast, another study suggested that the degree of nerve branching in DED patients was lower in comparison to that in normal subjects 22 . Lastly, several groups reported no significant difference in nerve branching between patients with DED and normal control subjects 24,25 . The increased tortuosity and branching of sub-basal corneal nerves are thought to result from nerve regeneration caused by the simultaneous action of damaging phenomena and nerve growth factors (NGF) and/or cytokines secreted on the ocular surface during the inflammatory process 30,31 . Thus, the difference in the stage and severity of DED and the degree of localized inflammation caused by DED presumably account for the inconsistent findings of several research groups about changes in the sub-basal nerves. There is sparse information about the exact mechanisms underlying the morphological changes in sub-basal nerves induced by ocular surface diseases. Furthermore, it is unknown whether the alterations are linked with clinical signs and symptoms of ocular surface disorders. However, our study has provided a clue to solve these riddles. The tortuosity of sub-basal nerves and the level of nerve branching appeared to be increased in the GVHD group. 
It is therefore conceivable that GVHD-related DED induces morphological alterations in the sub-basal nerves, which promotes severe inflammation on the ocular surface in HSCT recipients. The degree of the morphological changes is presumably an indication of the extent of inflammation that has occurred on the ocular surface. DCs, which are also referred to as Langerhans cells (LGs) in the epidermis and mucosal membranes of the ocular surface, act as main APCs on the ocular surface and play a vital role in maintaining immune homeostasis in the eyes. Additionally, they are known as key players in the initiation of immune responses [35][36][37][38]. Therefore, we evaluated the behaviour of DCs in the central cornea and limbal epithelia in HSCT recipients with or without GVHD-related DED using IVCM. The literature suggests that IVCM has several advantages that enable uncomplicated detection and quantification of DCs in the central cornea and limbal epithelia 11,39. However, IVCM allows us to detect other kinds of cells in addition to DCs and can only provide information on the morphology and location of cells. Therefore, identifying DCs only based on morphology can be scientifically inadequate. Macrophages, for instance, may also have a dendritic shape 40,41. However, because they are reported to be found exclusively in the corneal stroma 40,41, we identified cells with dendrites in epithelia as DCs. In this clinical investigation, the central corneas in the GVHD group had a higher density of DCs (119.29 ± 79.78 cells/mm2) than those in the non-GVHD group (37.57 ± 21.72 cells/mm2). One of the novelties of our clinical study was that we compared the density of DCs in HSCT recipients with and without GVHD. Consistent with our finding, Kheirkhah et al. previously reported a similar density of DCs in the central corneas of GVHD patients (146.71 ± 93.36 cells/mm2) 14, supporting the accuracy and dependability of our clinical examination. Our results also indicated that the density of DCs in the limbal epithelia in HSCT recipients with GVHD was considerably greater than that in those without GVHD (116.50 ± 52.61 vs. 55.57 ± 34.24 cells/mm2). These outcomes are presumably indicative of the higher degree of immune activation and inflammation on the ocular surface in GVHD patients and suggest that the changes in DCs in the central cornea and limbus in HSCT recipients with DED are induced by ocular GVHD instead of the conditioning regimen prior to HSCT. According to recent studies, DED is presumed to be one of the factors that can increase the density of DCs in corneal epithelia 12,42,43. In regard to immune-mediated diseases, several pieces of evidence suggest that the density of corneal epithelial DCs is augmented in patients with autoimmune diseases such as Stevens-Johnson syndrome 44, rheumatoid arthritis 26,45, systemic lupus erythematosus 46, and ankylosing spondylitis 47.
[Figure 3. Representative IVCM images of epithelial DCs and GICs (blue arrowheads) in the central cornea and limbal epithelia of GVHD patients (upper row) and non-GVHD patients (lower row).]
[Table 4. Correlations between IVCM parameters and clinical variables. *p < 0.05; **p < 0.01; ***p < 0.001. r: correlation coefficient.]
Therefore,
it is plausible that the interaction between immune cells causes many inflammatory cells to infiltrate into the corneal epithelia affected by cGVHD. Furthermore, our literature search revealed that the density of corneal DCs in DED patients with SS 33,42,48,49 and GVHD 12 is higher than that in DED patients without these immunological disorders. The development of these immune-mediated diseases is also presumed to contribute detrimentally to the changes in the behaviour of DCs in the corneal epithelia. However, Dana and colleagues reported different results. They recruited DED patients with chronic GVHD (n = 33) and DED patients who did not receive HSCT and were therefore without chronic GVHD (n = 21). Then, they compared IVCM variables in the two groups. As the clinical severity of DED in the GVHD group was significantly greater than that in the control group, they made adjustments to the clinical severity of DED in order to prevent the difference in the severity of DED between the two groups from affecting IVCM variables. After the adjustments, there was no significant difference in IVCM variables between DED patients with GVHD and those who did not undergo HSCT and were therefore free of GVHD 14 . Because of the close link between the propagation of GVHD and the progression of DED, a painstaking investigation will be required to distinguish the separate effects of these two events on the changes observed using IVCM. However, as indicated by this clinical investigation, the density of corneal epithelial DCs in HSCT recipients with GVHD was greater than those without GVHD, and the IVCM parameters in the cornea were associated with various clinical variables. Therefore, the density of corneal DCs could be a reliable indicator of the severity of ocular surface damage and inflammation in GVHD patients. Additional studies are needed to corroborate this notion. We also focused on GICs in the basal epithelial and sub-basal layers, which have been presumed to play a role in inflammatory ocular diseases 11 and DED 42 . The density of GICs in the central cornea in the GVHD group was significantly higher than that in the non-GVHD group. The increased infiltration of GICs may be indicative of a more active immune response. However, it is still unknown why GICs infiltrate into the ocular surface and how they function in that capacity. It is challenging to unravel the phenotype of these types of cells through morphological investigation using IVCM because they do not have characteristic shapes. Continuing endeavours are needed to uncover the phenotype and function of GICs and their role in the pathophysiology of GVHD-related DED. This clinical study had several limitations. First, the relatively small number of samples in this study did not allow us to comprehensively investigate the difference in IVCM variables between the different severities of GVHD-related DED. We were also unable to subdivide the 2 main groups (GVHD and non-GVHD groups) according to other clinical aspects such as whether HSCT recipients did or did not undergo TBI. Second, although the shapes of DCs presumably reflect the state of activation, maturation and antigen-presenting capability of DCs, we did not explore certain morphological characteristics of DCs, such as their size, the number of dendrites and the dendritic field. More work is required to determine the nature of the correlation between the morphological alterations of DCs and the extent of inflammation induced by GVHD-related DED. 
In addition, only the inferior limbal epithelia in HSCT recipients with and without GVHD were assessed in this study. Ideally, four quadrants of the limbus should be evaluated in order to comprehensively examine the limbus. However, prolonged examinations are needed to achieve this and would require placing the patients, especially those with GVHD, in very uncomfortable situations. Clinical characteristics such as age range, gender diversity, primary diseases, history of TBI and types of HSCT in the GVHD group were not exactly the same as those in the non-GVHD group. In reality, it is extremely difficult to match the GVHD and non-GVHD groups with respect to the above aspects, since the recruitment of HSCT recipients, especially those without GVHD-related DED, is highly demanding. In summary, the density of DCs in both the central cornea and limbal epithelia in HSCT recipients with GVHD-related DED was significantly higher compared with those without these conditions. In addition, the sub-basal corneal nerves in HSCT recipients with GVHD-related DED appeared to be more vulnerable to morphological changes in comparison with those in HSCT recipients without GVHD-related DED. All in all, these phenomena discovered through IVCM are presumed to be associated with the status of ocular GVHD-elicited inflammation in the eyes. Our clinical study has the great potential to promote the development of diagnostic technologies to monitor the state of inflammation on the ocular surface in HSCT recipients. Methods Patients. This study was approved by the Institutional Review Board and Ethics Committee of Keio Hospital (IRB No.: 20130013) and complied with the tenets of the Declaration of Helsinki. The individual participants were notified of all possible consequences of this study, and written informed consent was obtained from each of the subjects. The GVHD group had 12 HSCT recipients with GVHD-related DED who survived for more than 100 days after the transplantation, and the non-GVHD group had 10 equivalents without GVHD-related DED. The patients were 20 years or older when they were recruited. The subjects in the GVHD group met the Japanese diagnostic criteria for dry eye. The diagnostic criteria are as follows: (1) disturbance of the tear film (Schirmer test ≤ 5 mm or tear film breakup time ≤ 5 seconds); (2) conjunctivocorneal epithelial damage (fluorescein staining score ≥ 3 points or rose bengal staining score ≥ 3 points); and (3) dry eye symptomatology. Patients were diagnosed with definite dry eye only if they met all 3 diagnostic criteria 50 . Exclusion criteria were: (1) inability to undergo an IVCM examination due to poor physical and/or mental states, (2) written informed consent was not obtained, (3) other ophthalmic diseases such as cataract, retinal detachment and diabetic retinopathy, (4) ocular surgery performed within 6 months prior to the start of this clinical study, (5) current or previous use of contact lenses, (6) signs and symptoms of DED before HSCT, and (7) presence of fundus hemorrhage or uveitis. In vivo confocal microscopy. Central cornea and inferior limbal epithelia in all patients were examined using in vivo confocal laser scanning microscopy (the Rostock Corneal Software, ver. 1.2, Heidelberg Retina Tomograph II [RCM/HRT II]; Heidelberg Engineering GmbH). At the beginning of examination, the proper volume of the topical anesthetic 0.4% oxybuprocaine was administered to the conjunctival sac. 
A drop of gel (Bausch & Lomb, GmbH, Berlin, Germany), which served as a coupling medium, was placed on the anterior surface of a polymethylmethacrylate (PMMA) cap (Tomo-cap; Heidelberg Engineering GmbH). The individual subjects were instructed to place their heads in a head holder and continue looking forward (during examination of the central cornea) or looking up (during examination of the inferior limbal epithelia). Then, the examiner placed an applanate PMMA cap onto the area of interest, and the state of the area covered by the PMMA cap was examined through real-time video images displayed on a computer screen. The target areas and reference layers were determined by adjusting a controller and a zoom gear. Once all required adjustments were made, the examiner took images by pressing a foot pedal. Under the "sequence model", this machine provides automatic collection and storage of 100 images (400 × 400 μm in size) per sequence scan. While operating this machine, the examiner adjusted the focal plane to ensure that all target layers of each target area were clearly recorded. To avoid overlap, three non-overlapping scans in each area were performed. Image analysis. Three representative images of DCs, GICs and sub-basal nerves in each area were selected for image analysis. These images were all obtained in or behind the basal epithelial layer and in front of Bowman's layer. All the images obtained were analyzed using ImageJ software (available at: www.imagescience.org/meijering/software). DCs, which have branching dentritic structures, were identified morphologically, and their cellular images were bright (Fig. 1). This description of DCs was presented in a previous publication 11 . GICs possessed round or oval structures with high reflectivity, and their diameter ranged from 5 μm to 15 μm (Fig. 1). The density of DCs and GICs was defined as the number of cells per mm 2 . The number of cells was counted manually. The analysis of the sub-basal nerves was performed utilizing NeuronJ, a plug-in program for ImageJ, which can semi-automatically trace nerve fibers and measure their total length. The density of sub-basal nerves was defined as the total length of sub-basal nerve fibers per mm 2 . The tortuosity of the sub-basal nerves was graded from 0 to 4, in accordance with a previously published protocol 53 . The branching of the sub-basal nerves was defined as the number of branching points in sub-basal nerves per mm 2 . Three different subareas in each area of interest were photographed and analyzed, and the individual clinical parameters described above were subsequently obtained by averaging the values arising from the analysis. All the images acquired were evaluated by two masked observers (Y.O. and J.H.), and the average of the scores given by the two observers was shown as the value of each image. Statistical analysis. Statistical analysis was performed using SPSS software 22 (SPSS, Inc., Chicago, IL, USA). Continuous variables were reported as the mean ± standard deviation (SD). Categorical variables were presented as frequency or percentage. The Shapiro-Wilk test and the Levene test were conducted to investigate the distribution of data and homogeneity of variances in the data, respectively. Student's t-test was used to compare the means of the data with a normal distribution and homogeneity of variances in the GVHD and non-GVHD groups. The Welch separate variance estimation t-test was conducted for the data with a normal distribution but without homogeneity of variances. 
The Mann-Whitney U test was used for the comparison of data with skewed distribution. The χ 2 test was used for the comparison of qualitative variables. A correlation between IVCM variables and clinical variables was assessed using Spearman's rank-order correlation test. All p values were 2-sided and considered statistically significant when less than 0.05.
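The per-image quantities defined in the Image analysis subsection above (cell densities, nerve density, nerve branching, and the averaging over sub-areas and masked observers) can be summarized in a short sketch. This is a minimal illustration, not the study's code: the 0.16 mm2 frame area is derived from the stated 400 × 400 μm image size, and all function names and example counts are assumptions.

```python
# Minimal sketch (not the study's code) of the per-image quantities defined in the
# Image analysis subsection. Assumption: each IVCM frame covers 400 x 400 um = 0.16 mm^2.
FRAME_AREA_MM2 = 0.4 * 0.4  # 400 um expressed as 0.4 mm per side

def cell_density(cell_count, area_mm2=FRAME_AREA_MM2):
    """DC or GIC density: manually counted cells per mm^2 in one frame."""
    return cell_count / area_mm2

def nerve_density(total_traced_length_mm, area_mm2=FRAME_AREA_MM2):
    """Sub-basal nerve density: total traced fiber length (mm, e.g., from NeuronJ) per mm^2."""
    return total_traced_length_mm / area_mm2

def branching_density(branch_points, area_mm2=FRAME_AREA_MM2):
    """Branching of the sub-basal nerves: branching points per mm^2."""
    return branch_points / area_mm2

def averaged_parameter(scores_by_observer):
    """Average over the three sub-areas for each masked observer, then over the observers."""
    per_observer_means = [sum(scores) / len(scores) for scores in scores_by_observer]
    return sum(per_observer_means) / len(per_observer_means)

# Hypothetical example: DC counts from three frames for each of two observers.
observer_1 = [cell_density(n) for n in (18, 22, 20)]
observer_2 = [cell_density(n) for n in (20, 21, 19)]
print(round(averaged_parameter([observer_1, observer_2]), 1), "cells/mm^2")
```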
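Likewise, the test-selection logic described in the Statistical analysis subsection can be sketched with SciPy. This is a hedged illustration rather than the authors' SPSS workflow; the 0.05 threshold used to branch on normality and variance homogeneity, and the function names, are assumptions.

```python
# Hedged sketch of the test-selection logic described above, written with SciPy
# (the authors used SPSS). Alpha of 0.05 for the Shapiro-Wilk and Levene branches
# is an assumption for illustration.
from scipy import stats

def compare_groups(gvhd, non_gvhd, alpha=0.05):
    """Choose Student's t, Welch's t, or the Mann-Whitney U test, as described in the text."""
    normal = (stats.shapiro(gvhd).pvalue > alpha) and (stats.shapiro(non_gvhd).pvalue > alpha)
    if not normal:
        result = stats.mannwhitneyu(gvhd, non_gvhd, alternative="two-sided")
        return "Mann-Whitney U", result.pvalue
    equal_var = stats.levene(gvhd, non_gvhd).pvalue > alpha
    result = stats.ttest_ind(gvhd, non_gvhd, equal_var=equal_var)
    return ("Student's t" if equal_var else "Welch's t"), result.pvalue

def correlate(ivcm_parameter, clinical_variable):
    """Spearman's rank-order correlation between an IVCM parameter and a clinical variable."""
    rho, p = stats.spearmanr(ivcm_parameter, clinical_variable)
    return rho, p
```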
Influence of multiple apolipoprotein A-I and B genetic variations on insulin resistance and metabolic syndrome in obstructive sleep apnea
Background The relationships of apolipoprotein A-I (APOA-I) and apolipoprotein B (APOB) with insulin resistance and metabolic syndrome (MetS) are unclear in OSA. We aimed to evaluate whether multiple single nucleotide polymorphism (SNP) variants of APOA-I and APOB exert a collaborative effect on insulin resistance and MetS in OSA. Methods Initially, 12 APOA-I SNPs and 30 APOB SNPs in 5259 subjects were examined. After strict screening, four APOA-I SNPs and five APOB SNPs in 4007 participants were included. For each participant, the genetic risk score (GRS) was calculated based on the cumulative effect of multiple genetic variants of APOA-I and APOB. Logistic regression analyses were used to evaluate the relationships between APOA-I/APOB genetic polymorphisms, insulin resistance, and MetS in OSA. Results Serum APOB levels increased the risk of insulin resistance and MetS after adjusting for age, gender and BMI [odds ratio (OR) = 3.168, P < 0.001; OR = 6.098, P < 0.001, respectively]. APOA-I GRS decreased the risk of insulin resistance and MetS after adjustments (OR = 0.917, P = 0.001; OR = 0.870, P < 0.001, respectively). APOB GRS had no association with insulin resistance (OR = 1.364, P = 0.610) and had only a weak association with MetS after adjustments (OR = 1.072, P = 0.042). In addition, individuals in the top quintile of the APOA-I genetic score distribution had a lower risk of insulin resistance and MetS after adjustments (OR = 0.761, P = 0.007; OR = 0.637, P < 0.001, respectively). Conclusions In patients with OSA, the cumulative effects of APOA-I genetic variations decreased the risk of insulin resistance and MetS, whereas multiple APOB genetic variations had no association with insulin resistance and only a weak association with MetS. Apolipoprotein A-I (APOA-I) and apolipoprotein B (APOB) are the two main apolipoproteins. APOA-I is the major apolipoprotein in high-density lipoprotein cholesterol (HDL-C) and manifests antiatherogenic properties [5]. APOB is present in very low-density lipoprotein (VLDL), intermediate-density lipoprotein, and low-density lipoprotein cholesterol (LDL-C) and may enhance atherothrombosis [5,6]. Many clinical trials have revealed that APOA-I and APOB are independently associated with insulin resistance and MetS [7][8][9]. OSA is believed to be associated with APOA-I and APOB (i.e., in OSA, all sleep variables are positively correlated with the APOB/APOA-I ratio) [9]. Eight weeks of continuous positive airway pressure (CPAP) treatment can significantly decrease the APOB level [10]. Our previous study demonstrated that APOB/APOA-I increased the risk of insulin resistance and that insulin resistance acts as a mediator between OSA and APOB/APOA-I [11]. However, whether APOA-I and APOB are independently associated with insulin resistance and MetS in OSA remains uncertain. Both genetic and environmental factors play an important role in insulin resistance and MetS [12][13][14][15]. Although significant evidence links OSA to insulin resistance and MetS [3,16], little is known about the roles of the genetic factors of lipoproteins involved in insulin resistance and MetS in OSA. In particular, no data on potential links between susceptibility genes for APOA-I and APOB and OSA-related insulin resistance and MetS are currently available.
Ordinarily, there is a tiny effect size of one single nucleotide polymorphism (SNP) to increase the risk of disease in a large number of variants. However, when the cumulative effect of a substantial fraction of variations reaches a certain threshold, the risk of disease is significantly increased [17]. Previous studies have used a cumulative effect model (genetic risk score, GRS) to identify risk factors of a certain disease. For example, total cholesterol (TC), total triglyceride (TG), HDL-C, and LDL-C genetic variants are associated with cardiovascular disease [18]; QT interval (measure of the time between the start of the Q wave and the end of the T wave in the heart's electrical cycle) duration genetic variants are associated with drug-induced QT prolongation [19]; and atrial fibrillation genetic variants are associated with future atrial fibrillation and stroke [20]. However, the relationships between the cumulative effects of multiple genetic variants of APOA-I, APOB, insulin resistance, and MetS in OSA remain unclear. In this study, we pooled multiple genetic variants of APOA-I and APOB to investigate the effects of APOA-I and APOB genotype on insulin resistance and MetS in the large-scale, clinical cohort study on OSA. Subjects Subjects who were initially suspected of having OSA were consecutively enrolled to participate in the ongoing Shanghai Sleep Heath Study (SSHS) (previously described in [21]). Subjects with non-OSA and moderate-to-severe OSA were chosen from the SSHS for an additional genomic study. Next, subjects that met the following inclusion and exclusion criteria were selected. Inclusion criteria were: older than 18 years of age without a return visit and previous treatment. Exclusion criteria were: (1) missing APOA-I and APOB data, (2) missing data on more than 15% of total SNPs, (3) regular use of lipid lowering drugs, (4) presence of a systemic disease (i.e., chronic pulmonary, renal, or hepatic failure), cancer, psychiatric disease, hyperparathyroidism, hypoparathyroidism, or polycystic ovarian syndrome; (5) other sleep disorders, such as restless leg syndrome or narcolepsy; (6) cardiovascular disease (i.e., angina, myocardial infarction, heart arrhythmia, or valvular heart disease); and (7) missing systolic blood pressure (SBP), TC, HDL-C, and fasting plasma glucose (FPG) data. Ultimately, 4007 participants were analyzed in this study that was approved by the Institutional Ethics Committee of Shanghai Jiao Tong University Affiliated Sixth People's Hospital. Written informed consent was obtained from all subjects. Anthropometric and biochemical measurements Waist circumference (WC) was measured at the middle of the lowest costal margin and iliac crest. Hip circumference (HC) was measured at the widest part of the buttocks. Neck circumference (NC) was measured at the level of the cricothyroid membrane. WC, HC, and NC were measured by trained investigators following standard protocols. Body mass index (BMI) was calculated as weight (in kilograms) divided by height squared (in meters). The waist-hip ratio (WHR) was calculated as WC divided by HC (in centimeters). SBP and diastolic blood pressure (DBP) were measured in triplicate after at least a 10-min rest using an automated electronic device (Omron Model HEM-752 Fuzzy, Omron Company), and the average value of the three readings was used for analysis. A fasting blood sample was obtained the morning after polysomnographic monitoring. 
FPG, TC, TG, HDL-C, LDL-C, APOA-I, APOB, and apolipoprotein E were measured using an autoanalyzer (H-7600; Hitachi, Tokyo, Japan) in the hospital laboratory. Serum fasting insulin was measured using immunoassay. Homeostasis model assessment of insulin resistance (HOMA-IR) was calculated as fasting insulin (μIU/mL) × FPG (mmol/L)/22.5. HOMA-IR ≥ 2.5 was defined as insulin resistance [22]. Abnormal APOA-I and APOB were defined as serum levels < 1.20 and > 1.10 g/L, respectively, according to diagnostic criteria of the Joint Committee for Developing Chinese Guidelines on the Prevention and Treatment of Dyslipidemia in Adults [23]. A person had metabolic syndrome if presenting three or more of the following conditions [24]: (1) TG ≥ 150 mg/dL; (2) HDL-C < 40 mg/dL in men or < 50 mg/dL in women; (3) SBP ≥ 130 mmHg, DBP ≥ 85 mmHg, or diagnosed hypertension; (4) fasting glucose ≥ 100 mg/dL or drug treatment for type 2 diabetes; and (5) WC ≥ 90 cm in men or ≥ 80 cm in women. Polysomnographic evaluation and OSA definition Overnight standard polysomnography (PSG, Alice 4 or 5; Respironics Inc., Pittsburgh, PA, USA) was used to obtain objective sleep parameters. An electroencephalogram, bilateral electrooculogram, chin electromyogram, electrocardiogram, nasal and oral airflow, finger pulse oximetry, chest and abdominal movements, and body posture were recorded during sleep. Apnea was defined as cessation of airflow for ≥ 10 s, and hypopnea was defined as a ≥ 50% reduction in airflow accompanied by a ≥ 3% decrease in oxygen saturation, according to the 2007 American Academy of Sleep Medicine criteria [25]. The severity of OSA was determined by the apnea-hypopnea index (AHI), and non-OSA, mild, moderate, and severe OSA were defined as AHI < 5, 5-15, 15-30, and ≥ 30 per hour, respectively. The oxygen desaturation index was calculated as the number of episodes of oxygen desaturation ≥ 3% per hour during sleep. The micro-arousal index was calculated as the number of arousals per hour of sleep. For GRS construction, we assumed an additive genetic model for each variant [28]. The weighted GRS for APOA and APOB was calculated by multiplying each subject's risk allele score (0, 1, or 2) at each SNP by that SNP's β coefficient from our data; values for each locus were then summed. Statistical analysis Statistical analyses were performed using SPSS software (version 19.0, IBM Corp., Armonk, NY, USA). Continuous data are presented as the mean ± standard deviation (SD) for normalized variables and as the median (interquartile range) for skewed variables. Categorical variables are shown as proportions. The Hardy-Weinberg equilibrium test was performed for each variant before association analysis using PLINK (https://zzz.bwh.harvard.edu/plink/data.shtml). Linkage disequilibrium (LD) analysis was performed at https://archive.broadinstitute.org/mpg/snap/ldsearchpw.php. Differences in baseline characteristics among groups were examined using the least-significant difference test, one-way analysis of variance, the chi-squared test, the independent samples t-test, or the Mann-Whitney U test according to the distribution characteristics of the data. Linear regressions were used to evaluate the associations between SNPs and serum APOA and APOB levels. We used logistic regression models to assess the OR of individuals in the top quintiles of the APOA and APOB GRS distributions, with reference to individuals in the lowest quintile, to examine the risk of moderate-to-severe OSA, insulin resistance, and MetS, both unadjusted and adjusted for age, gender, and BMI.
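The definitions above (HOMA-IR, the three-of-five MetS rule, the AHI-based OSA severity categories, and the weighted GRS) can be summarized in a short sketch. This is illustrative only: the thresholds are taken from the text, while the function and parameter names and the example values are assumptions, not study data.

```python
# Hedged sketch of the definitions above: HOMA-IR, the 3-of-5 MetS rule, AHI-based
# OSA severity, and the weighted GRS. Thresholds come from the text; names and
# example values are assumptions for illustration.

def homa_ir(fasting_insulin_uIU_ml, fpg_mmol_l):
    """HOMA-IR = fasting insulin (uIU/mL) x FPG (mmol/L) / 22.5; >= 2.5 defines insulin resistance."""
    return fasting_insulin_uIU_ml * fpg_mmol_l / 22.5

def has_mets(tg_mg_dl, hdl_mg_dl, sbp, dbp, fpg_mg_dl, wc_cm, male,
             on_diabetes_drugs=False, hypertension_dx=False):
    """Metabolic syndrome if three or more of the five listed criteria are met."""
    criteria = [
        tg_mg_dl >= 150,
        hdl_mg_dl < (40 if male else 50),
        sbp >= 130 or dbp >= 85 or hypertension_dx,
        fpg_mg_dl >= 100 or on_diabetes_drugs,
        wc_cm >= (90 if male else 80),
    ]
    return sum(criteria) >= 3

def osa_severity(ahi):
    """Classify AHI per hour of sleep: <5 non-OSA, 5-15 mild, 15-30 moderate, >=30 severe."""
    if ahi < 5:
        return "non-OSA"
    if ahi < 15:
        return "mild"
    if ahi < 30:
        return "moderate"
    return "severe"

def weighted_grs(risk_allele_counts, betas):
    """Weighted GRS: sum over loci of (0/1/2 risk alleles) x that SNP's beta coefficient."""
    return sum(count * beta for count, beta in zip(risk_allele_counts, betas))

# Hypothetical subject: insulin 12 uIU/mL, FPG 5.6 mmol/L -> HOMA-IR ~3.0 (insulin resistant).
print(round(homa_ir(12, 5.6), 2), osa_severity(32.5))
```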
Linear regression was used to evaluate the associations between GRS and clinical characteristics. Stepwise multivariate linear regression analysis was used to predict HOMA-IR. A two-tailed P value < 0.05 was considered statistically significant. Baseline characteristics In total, 4007 eligible subjects (596 non-OSA, and 3411 moderate-to-severe OSA) were enrolled in this study (see flow chart in Fig. 1). Of the 4007 participants enrolled, 596 were non-OSA, 831 were moderate OSA and 2580 were severe OSA. Of the non-OSA subjects, the median age was 34 (range 29-43), the median HOMA-IR was 1.172 (range 1.11-2.55), median serum APOA-I was 1.092 (range 0.092-1.15) g/L, medium serum APOB was 0.77 (range 0.65-0.89) g/L, the percentage of insulin resistance was 26.8%, the percentage of metabolic syndrome was 27.2%, the median AHI value was 2 (range 0.8-3.4) events/h. Compared with non-OSA, patients with OSA were older and had higher serum concentrations of glucose, insulin, sleep parameters and ratio of smoking, drinking, prevalence of insulin resistance and percentage of MetS. OSA patients had higher levels of anthropometric parameter, such as SBP, DBP, BMI, NC, WC, HC, WHR. With the exception of serum APOA-I, all biochemistry parameters, demographic parameters and sleep parameters were also significantly different among the groups (Table 1). There were more subjects with insulin resistance and MetS in OSA group compared to non-OSA (P < 0.001). The percentages of insulin resistance in non-OSA, moderate, and severe OSA were 26.8%, 46.8%, and 63.4%, respectively. The percentages of MetS in non-OSA, moderate, and severe OSA were 27.2%, 50.2%, and 64.3%, respectively. We also assessed the clinical characteristics of the participants in the top quintiles of the APOA GRS and APOB GRS compared with those in the bottom fifth quintiles (Additional file 1: Tables S1 and S2, respectively). The basic characteristics of insulin, insulin resistance, and TG were lower in the highest quintile of APOA GRS than in the lowest quintile of APOA GRS, whereas HDL-C, LDL-C, APOA, and APOA/APOB were higher (all P < 0.05; Additional file 1: Table S1). As expected, TC, APOB, and LDL-C levels were higher in the highest quintile of APOB GRS than in the lowest quintile of APOB GRS (all P < 0.05, Additional file 1: Table S2). Associations of serum APOA-I and APOB and their GRS with insulin resistance and MetS risks The associations between each SNP of APOA-I and APOB with insulin resistance and MetS are listed in Additional file 1: Table S3. APOA-I SNPs rs9804646 and rs888246 were associated with insulin resistance (OR = 0.856, 95% confidence interval [CI] 0.756-0.968, P = 0.013; OR = 1.340, 95% CI 1.069-1.680, P = 0.011) after adjustment. APOA-I SNPs rs964184, rs9804646, Serum APOA-I levels remained to decrease the risk of MetS after adjustment (OR = 0.09, P < 0.001). Serum APOB levels increased the risk of insulin resistance and MetS (OR = 0.573, P < 0.001; OR = 0.131, P < 0.001), which remained after adjusting for age, gender, and BMI (OR = 3.168, P < 0.001; OR = 6.098, P < 0.001). APOA-I GRS decreased insulin resistance (OR = 0.923, 95% CI 0.880-0.968, P < 0.001) and MetS (OR = 0.886, 95% CI 0.845-0.929, P < 0.001) significantly, which remained after adjusting for age, gender, and BMI (all P < 0.001). APOB GRS was not associated with insulin resistance We also stratified APOA-I and APOB GRS into quintiles. 
When compared with the bottom quintile, subjects in the top quintile of the APOA-I GRS group had a lower risk of insulin resistance and MetS (Table 4 Table S4), even after adjustment. APOB GRS was associated with elevated TC, LDL-C, APOB, and APOB/APOA-I levels (all P < 0.001, Additional file 1: Table S5). Discussion Our study was the first to comprehensively examine the roles of APOA-I and APOB levels and their genetic variations on insulin resistance, MetS, and OSA using current large-scale sampling and strict data acquisition. OSA Patients had higher TC, TG, LDL-C APOB levels and APOB/APOA-I ratios than those without OSA. Subjects with OSA were more obese and had higher levels of glucose, SBP, DBP, and insulin resistance than those without OSA. Not only did serum APOA-I and APOB levels correlate with insulin resistance and MetS, but cumulative genetic variants of APOA-I and APOB also exhibited effects on insulin resistance and MetS in OSA. Individuals in the top quintile of APOA-I genetic score distributions tended to have a lower risk of insulin resistance and MetS. Obesity is one of the most important risk factor of OSA, and the major contributing factor to the development of insulin resistance and MetS. Thus it is most likely that OSA and insulin resistance are parallel outcomes of obesity while they might not have a direct causality relationship with each other. For the past years, various experiments have been able to better unravel complex mechanisms via in-vitro and animal models, prospective observational and treatment studies revealed that the association between metabolic disease (including insulin resistance, obesity, metabolic syndrome) and OSA is bi-directional and feedforward [4,29]. The interrelationships among OSA, insulin resistance, MetS, dyslipidemia, obesity was multifaceted and complicated. The manuscript aimed to evaluate the influence of lipids multiple genetic variants (APOA and APOB) on insulin resistance and MetS in OSA patients using obesity as a confounding factor. It has reported that lower APOA-I level was associated with insulin resistance in patients with impaired glucose tolerance [7] and a higher prevalence of MetS [30]. The APOB level predicted the incidence of MetS in a 5-year follow-up study [8]. The relationship between APOA-I, APOB level and metabolic disease in OSA had been rarely studied. Our study suggests serum APOA-I, APOB level associated insulin resistance and MetS in OSA, and APOA-I and APOB were involved in metabolism and probably further increase cardiovascular disease risk. APOA-I and APOB levels are not only influenced by environmental factors, such as diet and exercise, but are also subject to genetic regulation [31,32]. Thus, APOA-I and APOB genetic variations may have a causal effect on insulin resistance and MetS. Previous studies have focused on the relationship between APOA-I and APOB genetic variations and serum lipid traits [31,33,34]. It had been reported that there was an ethnic and gender specific association between the APOA rs964184 with lower HDL-C, higher TG and lowers APOA levels in Chinese populations [31]. The sample was small, and the results were lack of the consideration of the impact of other SNP and diseases on serum lipids levels. APOA rs964184 was also associated with serum TG levels [35,36], metabolic syndrome [35], coronary heart disease [37], and hemorrhagic stroke risk [38]. The APOA rs964184 study on OSA and insulin resistance has not been reported yet. 
Our results revealed that rs964184 had negative associations with serum APOA levels but had no associations with insulin resistance in a large cross-sectional study. APOB rs1042031(EcoRI) has been widely used to study coronary heart disease (CHD) [39] and Steroid-Induced Osteonecrosis of the femoral head (SONFH) [40]. They found APOB rs1042031 confers a moderate risk for CHD [39] and increase the SONFH risk with moderate levels of evidence. APOB rs693 [41], rs2854725 [42] and rs1367117 [43,44] were associated with serum APOB levels, further associated with familial hypercholesterolemia [45] and heart-related traits [46], predicted the risk of CHD [42], maternally-derived effect on BMI [43].There were fewer studies about APOA SNP rs9804646, rs10047462, rs888246 and APOB rs12713956. Data on APOA-I and APOB genetic polymorphisms in insulin resistance and MetS are still lacking. Until now, the effects of multiple genetic variants about APOA and APOB have not been studied yet, especially in a specific disease population. Our data indicate that genetic variants of APOA-I and APOB SNPs play different roles in metabolic disorders, such as APOA-I rs9804646 decreased the risk of insulin resistance and MetS, but APOA-I rs888246 increased the risk of insulin resistance and MetS. There is a tiny effect of one SNP for the disease development. Thus, we use the GRS model to study the effect of multiple genetic variations. The GRS is a convenient way to summarize a number of genetic variants associated with an individual's genotype. The GRS does not change over time and holds the advantage that it can be used to assess the risk of metabolic dysfunction at any age from birth on. The GRS is always used in Mendelian randomization analysis to estimate the causal effect of a risk factor on an outcome [47]. This facilitates the use of genetic information, either alone or in combination, with other factors in clinical and research settings. Prospective studies have used the GRS to assess the cumulative effects of TC, TG, HDL-C and LDL-C related genetic variations on blood lipid levels, coronary events, and cardiovascular disease [18]. With the increasing availability of multiple genetic variants associated with lipids, it is becoming increasingly common to study associations with allele scores. Our study was the first to screen 42 genetic variants and ultimately combine four APOA-I SNPs and five APOB SNPs (using the GRS model) to comprehensively examine the genetic roles of APOA-I and APOB in insulin resistance and MetS in OSA. In our study cumulative APOA-I genetic variations (APOA-I GRS) decreased the risk of insulin resistance and MetS, whereas cumulative APOB genetic variations (APOB GRS) increased the risk of MetS in OSA. The APOA-I gene is believed to be stimulated by insulin through SP-1 binding elements [48], and genetic variations of APOA-I may affect the binding site. Gene-diet interactions may also contribute to MetS [49,50]. Because APOB SNPs are related to lipids [51,52], the GRS of APOB may be beyond the interval of the association with insulin resistance. Future clinical trials as well as rodent studies should be designed to explore the potential mechanisms involved. Both genetic and environmental factors are important contributors to insulin resistance. Our data indicate that environmental factors, such as BMI and AHI contribute more to HOMA-IR than do genetic variations. Genetic and environmental correlations for the same disease are complex [53]. 
Risk factors of OSA include obesity, age, male sex, and genetic background [54], and obesity is considered the most important risk factor [54,55]. Therefore, in OSA-related insulin resistance, there should be more emphasis on environmental interventions than on genetic breakthroughs. Our study aimed to obtain high quality results by using a large sample size, laboratory-based PSG, unified serological examination, and standard questionnaires. In addition, we used multiple SNPs in a GRS model to evaluate the prediction of individual risk for metabolic disorders. However, several limitations of the present study should be noted. First, although we aimed to collect a sufficient amount of SNPs, several APOA-I and APOB SNPs may have been omitted. Furthermore, more complex genetic variants, including indels and structural variants, were not considered. The effects of SNP-SNP and gene-environment interactions were not modeled. Second, although we made efforts to minimize limitations by building our large sample population using subjects with relatively homogeneous lifestyles and ethnicity and adjusted for common confounding factors, such as age, sex, and BMI, but other more sophisticated environmental factors, such as economic status, exercise, and lifestyle, were not considered in this study. Third, we quantified the effects of multiple genetic variations, further studies on the mechanisms are still needed. The study was cross-sectional rather than prospective and community-based design, and could not provide the causative evidence. Future clinical trials, as well as animal studies of genetics-derived OSA, will better shed light on these complex mechanisms. Conclusions In conclusion, both the protective effects of multiple APOA-I genetic variants and damaging effects of APOB genetic variations impact the biomarkers of OSA patients.Obviously, the different cumulative effects of genes increase the complexity of metabolic disorders in OSA. Additional file 1: Table S1. Basic characteristics of the top vs bottom quintile APOA GRS; Table S2. Basic characteristics of the top vs bottom quintile APOB GRS; Table S3. The associations between SNPs with insulin resistance and MetS. Table S4. Linear regression of APOA GRS with clinical characteristics; Table S5. Linear regression of APOB GRS with clinical characteristics; Table S6. The stepwise multivariate linear regression model for predicting HOMA-IR.
Sarcomatoid Mesothelioma With New Pancreatic Lesions Presenting As Acute Pancreatitis: A Case Report
Sarcomatoid mesothelioma is a rare, aggressive malignancy that usually follows asbestos exposure. It is the least common subtype of mesothelioma, after the epithelial and biphasic subtypes. Pleural mesothelioma can metastasize, with the liver, kidneys, adrenal glands, and opposite lung being the most commonly reported sites of metastasis. Metastasis to the pancreas is extremely rare, which is why the authors present the case of a 78-year-old male who was found to have acute pancreatitis, most likely secondary to metastatic lesions.
Introduction
Sarcomatoid cells are typically present in the bones, nerves, and connective tissues. Sarcomatoid mesothelioma is the rarest subtype of malignant mesothelioma, with a median survival of four months with surgical treatment and 15 months with immunotherapy. It usually occurs following exposure to asbestos and is characterized by spindle cell proliferation under microscopy. Pleural mesothelioma most often metastasizes to the liver, spleen, kidneys, and adrenal glands. We present a case of a 78-year-old male with a history of pleural sarcomatoid mesothelioma who presented to the emergency department with abdominal pain and was found to have acute pancreatitis, likely secondary to metastatic lesions in the neck and tail of the pancreas.
Case Presentation
The patient is a 78-year-old male with a past medical history of diabetes mellitus, hyperlipidemia, benign prostatic hypertrophy, hypertension, allergic rhinitis, and a remote history of possible occupational asbestos exposure, recently diagnosed with metastatic sarcomatoid mesothelioma (Figure 1), who came to the emergency department complaining of a three-day history of abdominal pain, nausea, and vomiting. The patient reported progressively worsening food intolerance with epigastric abdominal discomfort. He had multiple non-bloody, non-bilious vomiting episodes the day prior to presentation. At home, the patient was taking acetaminophen 325 mg three times a day, an albuterol inhaler every six hours, amlodipine 10 mg once a day, docusate/sennosides 50 mg/8.6 mg once a day, fluticasone/salmeterol 250/50 twice a day, glipizide 5 mg once a day, losartan 100 mg once a day, metoprolol tartrate 25 mg twice a day, oxycodone 10 mg three times a day, pantoprazole 40 mg once a day, and rosuvastatin 40 mg once a day. In the emergency department, the patient was afebrile, with a pulse of 91 beats per minute, a respiratory rate of 18 breaths per minute, and a blood pressure of 158/80 mmHg. His laboratory investigation was significant for the values shown in Table 1. Computed tomography (CT) of the abdomen and pelvis showed possible mild pancreatitis associated with new masses in the pancreatic neck and tail (Figure 2). The patient also had worsening metastatic disease involving the peritoneum, paraspinal muscles, and bone.
[FIGURE 2: CT scan of the abdomen showing multiple new pancreatic metastases (arrow).]
The patient was admitted to the hospital for the management of acute pancreatitis. He was initially kept on a nil-by-mouth diet that was slowly advanced as tolerated, and he was started on intravenous fluid replacement and a pain control regimen. The patient slowly improved and was discharged home with plans to follow up with his oncologist to start the planned regimen of nivolumab and ipilimumab once the acute pancreatitis resolved.
Discussion
Mesothelial tumors are divided into benign or preinvasive tumors and mesotheliomas. The benign subgroup includes mesothelioma in situ, well-differentiated papillary mesothelial tumors, and adenomatoid tumors. Invasive mesothelioma is divided histologically into epithelial, biphasic, and sarcomatoid subtypes [1]. The epithelial subtype is the most common invasive mesothelioma subtype, comprising 60% of cases, followed by the biphasic subtype at 20%; the sarcomatoid subtype is the rarest [2]. Identifying the subtype of mesothelioma is important since it impacts treatment plans and prognosis discussions [3]. The median survival in patients diagnosed with sarcomatoid mesothelioma who undergo surgical treatment is 4 months, compared with 19 months and 12 months for the epithelioid and biphasic subtypes [3]. A recent study showed that the median survival for sarcomatoid mesothelioma could reach as high as 15 months with immunotherapy [2]. The worse prognosis of the sarcomatoid subtype is thought to be due to its ability to invade the surrounding tissue, its rapid growth, its inconsistent expression of tumor markers, and its fibrous nature [2,4]. The primary cause of sarcomatoid mesothelioma is thought to be asbestos exposure [2]. Cancer can arise 20 to 60 years after exposure [2]. Presenting symptoms vary but can include fatigue, anemia, anorexia, and, similar to our patient, chest pain, cough, and hemoptysis [2]. Histologically, it is defined as spindle cell proliferation in fascicles or haphazard patterns invading the lung parenchyma or adipose tissue [1]. Investigational studies include CDKN2A, MTAP, BAP1, cytokeratins, mesothelial markers, FLI1, CD31, ERG, CD34, STAT6, myogenin, S-100 protein, melan A, HMB45, and SOX10 [1]. Pleural mesothelioma has the ability to metastasize; in fact, a study in 2012 showed that more than half of patients had distant metastases, with the most common site being the liver, followed by the adrenal glands, kidneys, and the opposite lung [2]. However, metastasis to the pancreas is extremely rare [5]. There is a chance of diagnostic inaccuracy for mesotheliomas because multiple immunohistochemical stains are useful, the combination of which differs among mesotheliomas, and the interpretation of these studies is subjective. Thus, it is recommended that a team of pathologists with proven experience in diagnosing mesothelioma confirm the diagnosis in cases of diagnostic uncertainty [6,7].
Conclusions
Our patient improved after management of pancreatitis per the guidelines. He was later discharged and was planned to start outpatient immunotherapy. The authors of this case report intended to highlight the pancreas as a potential site of metastasis for pleural sarcomatoid mesothelioma. In addition, metastatic disease involving the pancreas is an important differential diagnosis for acute pancreatitis.
[FIGURE 1: PET scan showing increased FDG activity in multiple foci throughout the right pleura, mostly posteriorly (arrow). PET: positron emission tomography; FDG: fluorodeoxyglucose.]
v3-fos-license
2023-12-10T16:25:12.437Z
2013-02-15T00:00:00.000
266137043
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://escholarship.org/content/qt14m2d36d/qt14m2d36d.pdf?t=s5comk", "pdf_hash": "72e41973cab148953a3aa0f63607b51871a774e2", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41720", "s2fieldsofstudy": [ "Medicine" ], "sha1": "c443dcdfb1d9bc3403ad83e89fd0c151406ce470", "year": 2023 }
pes2o/s2orc
Pregnancy-adapted YEARS Algorithm: A Retrospective Analysis

Introduction Pulmonary embolism (PE) is an important diagnosis to make given its associated morbidity. There is currently no consensus on the initial workup of pregnant patients suspected of having a PE. Prospective studies conducted in Europe using a pregnancy-adapted YEARS algorithm showed safe reductions in computed tomography pulmonary angiography (CTPA) imaging in pregnant patients suspected of PE. Our objectives in this study were 1) to measure the potential avoidance of CTPA use in pregnant patients if the pregnancy-adapted YEARS algorithm had been applied and 2) to serve as an external validation study of the use of this algorithm in the United States.

Methods This study was a single-system retrospective chart analysis. Criteria for inclusion in the cohort consisted of the keywords pregnant; older than 18; and chief complaints of shortness of breath, chest pain, tachycardia, hemoptysis, deep vein thrombosis (DVT), and D-dimer, from January 1, 2019, to May 31, 2022. We then analyzed this cohort retrospectively using the pregnancy-adapted YEARS algorithm, which includes clinical signs of a DVT, hemoptysis, and PE as the most likely diagnosis, together with a D-dimer assay. Patients within the cohort were then subdivided into two categories: aligned with the YEARS algorithm, or not aligned with the YEARS algorithm. Patients who did not receive a CTPA were analyzed for a subsequent diagnosis of a PE or DVT within 30 days.

Results A total of 74 pregnant patients were included in this study. There was a PE prevalence of 2.7% (two patients). Of the 36 patients who did not require imaging by the algorithm, seven CTPAs were performed. Of the patients who did not receive an initial CTPA, zero were diagnosed with PE or DVT within a 30-day follow-up. In total, 85.1% of all the patients in this study were treated in concordance with the pregnancy-adapted YEARS algorithm.

Conclusion The use of the pregnancy-adapted YEARS algorithm could have resulted in decreased utilization of CTPA in the workup of PE in pregnant patients, and the algorithm showed reductions similar to those in prospective studies done in Europe. The pregnancy-adapted YEARS algorithm was also shown to be similar to the clinical rationale used by clinicians in the evaluation of pregnant patients, which indicates its potential for widespread acceptance into clinical practice.

INTRODUCTION
One of the challenges the emergency physician faces is the prompt diagnosis of pulmonary embolism (PE) in pregnant patients [2,3]. Studies show that approximately 9% of pregnancy-related deaths in the United States are due to a PE [5,6]. The normal physiologic changes in pregnancy substantially overlap with the clinical signs and symptoms of PE, which further complicates PE workups in this population. D-dimer testing, widely used in non-pregnant patients, is controversial in pregnancy because its accuracy varies by trimester [3,7,10]. Reports show that the prevalence of PE in pregnant patients undergoing diagnostic workup in the emergency department (ED) is approximately 3.7%, whereas non-pregnant patients of childbearing age show a PE prevalence of 6.0% [11]. Diagnostic workup, such as computed tomography pulmonary angiography (CTPA) or a V/Q scan, increases costs and evaluation times, and these scans expose the fetus to radiation. Analyses have shown a 121% increase in radiologic examinations in pregnant women from 1997 to 2006 [12].
While radiation poses potential teratogenic effects, these effects are dose-dependent and vary based on gestational age. Radiation exposures greater than 500 milligray (mGy) cause fetal damage, whereas exposure to less than 50 mGy has not been associated with differences in pregnancy outcomes [13]. Although CTPA is associated with radiation exposure of <5 mGy, given the complexities of the effects of exposure based on gestational age and other radiation exposure during the pregnancy, it is recommended that the potential benefit of the radiologic study be weighed against the radiation exposure to the fetus [12,13]. Multiple criteria have been developed to aid clinicians in quickly assessing and diagnosing PE, including the Wells score, the PE rule-out criteria (PERC), and the YEARS criteria. However, these criteria were originally developed excluding pregnant patients from their studies, which has resulted in a lack of consensus on PE workup in pregnant individuals [14,16,17]. In 2019, an international study aimed to clinically evaluate PE in pregnant patients using a pregnancy-adapted YEARS algorithm [14]. The study concluded that a pregnancy-adapted YEARS algorithm was viable for ruling out a PE without serious adverse consequences. The pregnancy-adapted YEARS algorithm is summarized in Figure 1. Prior prospective studies applying the pregnancy-adapted YEARS algorithm took place in Europe [14,18]. Additionally, another study reviewed the prevalence of PE among non-pregnant patients in North America and Europe. The prevalence of patients tested for PE in Europe was 23% compared to 8% in North America. This study also reported both a lower rate of CTPA utilization (38% vs 60%) and a lower diagnostic yield from CTPA (13% vs 29%) in North America [19]. The objective of our study was to measure the potential avoidance of CTPA in pregnant patients being evaluated for a PE if the pregnancy-adapted YEARS algorithm had been applied and to serve as an external validation study of the use of this algorithm in the US.

Study Design
This study was a retrospective chart analysis conducted on visits from January 1, 2019, to May 31, 2022, spanning one Level I trauma center/tertiary care center and one urban community hospital in Pennsylvania. The cohort included pregnant patients ≥18 years of age who presented to the ED with chief complaints consistent with a suspected PE: shortness of breath, chest pain, tachycardia, hemoptysis, and clinical signs of deep vein thrombosis (DVT). For the robustness of the dataset, our search strategy also included pregnant patients for whom a D-dimer had been ordered. We excluded patients who did not receive a D-dimer test as part of their clinical workup. We also excluded patients who were worked up for a PE outside their pregnancy period. Procedures and protocols were approved by the institutional review board.

Population Health Research Capsule
What do we already know about this issue? Pulmonary embolism is challenging to diagnose in pregnant patients. In European studies the pregnancy-adapted YEARS algorithm has shown promise in simplifying this diagnosis.
What was the research question? We investigated the reduction in computed tomography (CT) achieved by applying the YEARS algorithm to pregnant patients in two US hospitals.
What was the major finding of the study? In our 74-patient sample, use of the YEARS algorithm could have safely avoided seven CTs (19.4% reduction).
How does this improve population health?
Adoption of the pregnancy-adapted YEARS algorithm could safely reduce CT imaging in pregnant patients, reducing their radiation exposure and streamlining ED workup.

Procedures
Patients for this study were gathered by an initial search strategy that used the SlicerDicer feature in the Epic electronic health record (Epic Systems Corporation, Verona, WI). SlicerDicer is a validated tool within Epic that allows for the selection of patients given certain inclusion and exclusion data [20]. Trained medical student research assistants (RAs) extracted patient data via retrospective chart review. The RAs were initially blinded to the study outcome. The two senior authors (KW, DL), both board-certified emergency physicians, reviewed a random sampling of each abstractor's charts for accuracy. Each chart was then tabulated by chief complaint and subsequent findings according to the YEARS algorithm summarized in Figure 1, regardless of whether the algorithm was used in the patient's workup. Any questionable cases were reviewed once more by an attending physician.

Clinical signs of a DVT included documented clinician suspicion of a DVT or documented unilateral or bilateral leg swelling, warmth, pain, or discoloration. Hemoptysis was deemed present if the patient reported hemoptysis during the visit or within 24 hours of a visit, or if it was determined by the evaluating clinician to be relevant. Pulmonary embolism as the most likely diagnosis was determined through thorough evaluation of health records. A detailed methodology of how "PE most or equally likely diagnosis" was determined is elucidated in the supplemental attachment. Any disagreement in the determination of PE as the "most or equally likely" diagnosis triggered review by a senior author and was resolved by consensus. The RAs evaluated charts independently, and ultimately all charts adjudicated as "PE most or equally likely diagnosis" were discussed by both senior authors; therefore, we did not calculate a kappa statistic. Missing historical or clinical exam findings were treated as absent.

If the CTPA showed a new filling defect in any pulmonary artery, PE was assumed to be present [21]. If compression ultrasonography showed noncompressibility of a proximal vein, a DVT was assumed to be present [19]. Patients were then further categorized as nonconcordant or concordant with the pregnancy-adapted YEARS algorithm (Figure 1).

Patients who did not receive a CTPA were assessed within a 30-day follow-up period. These visits included subsequent appointments in which the previous ED visit was addressed. Further analysis at the follow-ups included workup for suspected VTE or PE, or an additional ED visit as recommended by the treating clinician. All follow-up visits were within 30 days of the initial ED encounter for PE workup. Additionally, all patients in the study completed their pregnancy in the health system.

Analysis
We used Excel (Microsoft Corporation, Redmond, WA) to perform fundamental statistical calculations. To maintain data integrity and ensure ongoing data accuracy, we implemented regular quality control procedures, including periodic reviews and spot-checking. This involved random sampling of entered data for extrinsic verification. We did not use data software to collect data.
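To make the decision rule concrete, the following is a minimal sketch of the pregnancy-adapted YEARS logic as we read it from Figure 1 and from the way charts were categorized in this review. It is illustrative only: the class, function, and field names are ours rather than anything used in the actual chart abstraction, and the compression-ultrasonography step for patients with DVT signs reflects our understanding of the published algorithm.

```python
from dataclasses import dataclass

# Hypothetical patient record; field names are illustrative, not from the study database.
@dataclass
class Patient:
    dvt_signs: bool                     # clinical signs of deep vein thrombosis
    hemoptysis: bool
    pe_most_likely: bool                # PE judged the most (or equally) likely diagnosis
    d_dimer: float                      # mg/L
    dvt_confirmed_on_cus: bool = False  # compression ultrasonography result, if performed

def pregnancy_adapted_years(p: Patient) -> str:
    """Return the suggested next step for a pregnant patient with suspected PE."""
    # If there are clinical signs of DVT, compression ultrasonography comes first;
    # a confirmed proximal DVT establishes VTE and treatment starts without CTPA.
    if p.dvt_signs and p.dvt_confirmed_on_cus:
        return "treat VTE (no CTPA needed)"

    criteria_met = sum([p.dvt_signs, p.hemoptysis, p.pe_most_likely])

    # D-dimer threshold depends on whether any YEARS item is present.
    threshold = 1.0 if criteria_met == 0 else 0.5  # mg/L
    if p.d_dimer < threshold:
        return "PE excluded (no CTPA)"
    return "CTPA indicated"

# Example: no YEARS items and a D-dimer of 0.8 mg/L -> PE excluded under the adapted
# algorithm, although a conventional 0.5 mg/L cut-off would have sent this patient to CTPA.
print(pregnancy_adapted_years(Patient(False, False, False, 0.8)))
```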
RESULTS
A total of 323 patients were found via the initial search strategy. After removing duplicates and patients who were not pregnant and did not have a D-dimer test performed, 67 cases remained. These cases were cross-referenced with the system's internal radiology database, which records the pregnancy status of all patients who received ionizing radiation, yielding an additional seven cases for analysis.

During the study period, 74 patients were evaluated for PE. The patients were 19-38 years old (mean age 27.85). The highest percentage (41.9%) of patients were in the third trimester of pregnancy at the time of their evaluation. The presenting complaints of the patients reviewed are summarized in Table 1.

Seven of the 74 patients reviewed did not have D-dimer testing completed and thus were excluded from the analysis to determine the effectiveness of the pregnancy-adapted YEARS algorithm. Five of the excluded patients met at least one YEARS criterion, and two of those five patients were found to have a PE. These two patients comprise the 2.7% prevalence of PE in our study cohort. A breakdown of the range of D-dimer levels is represented in Table 2. Among the 67 patients included in the analysis, 47 patients (70.15%) met no YEARS criteria, and 20 patients (29.85%) met one or more YEARS criteria. Eighteen patients (90%) met the criterion of PE being considered the number one diagnosis, one patient (5%) had unilateral leg swelling, and one patient (5%) had both hemoptysis and PE considered as the number one diagnosis.

Among the 47 patients who did not meet any of the three YEARS criteria, 35 (74.47%) had a D-dimer below the threshold of 1.0 milligrams per liter (mg/L), and 12 (25.53%) had a D-dimer greater than 1.0 mg/L. Of those 35 patients who should not have undergone CTPA based on the pregnancy-adapted YEARS algorithm, seven (20%) had a CTPA performed. These seven patients represent the patients who could have avoided radiation exposure with application of the YEARS-adapted algorithm. Four of these patients had D-dimer levels between 0.5 and 1.0 mg/L, and three patients had a D-dimer level <0.5 mg/L. None of these seven patients were found to have a PE on imaging. Among the 28 patients who did not have a CTPA performed, 24 patients (85.71%) had a follow-up evaluation in the health system within 30 days, and none were found to have a VTE diagnosed. Four patients (14.29%) did not have a follow-up visit documented within 30 days of their PE workup in the ED. Of note, of the 28 patients who did not have a CTPA performed, 16 (57.14%) had D-dimer levels between 0.5 and 1.0 mg/L.

Of the 12 patients who met zero YEARS criteria and had a D-dimer greater than 1.0 mg/L, 10 (83.33%) had a CTPA performed, all of which showed no PE. Two (16.67%) of these 12 patients did not have a CTPA performed. One of these did not have a follow-up visit documented within 30 days of their PE workup in the ED. However, this patient had no diagnosis of VTE or new anticoagulant medication listed on admission to labor and delivery.

Of the 20 patients with one or more YEARS criteria, 19 (95%) had a D-dimer >0.5 mg/L, and one patient (5%) had a D-dimer of <0.5 mg/L. The patient with a D-dimer of <0.5 mg/L did not have a CTPA performed and had no VTE at 30-day follow-up. Of the 19 patients with D-dimer levels of >0.5 mg/L, 17 (89.47%) had CTPA imaging performed and one (5.26%) had a V/Q scan done, none of which were positive for PE. One patient (5.26%) did not have CTPA imaging done.
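As a quick check on the arithmetic, the headline proportions reported in this section and in the Discussion follow directly from the counts above. The short snippet below simply reproduces them; the counts are copied from the Results, and the variable names are ours.

```python
# Counts taken from the Results (67 analysed patients).
analysed = 67
no_criteria_low_ddimer = 35        # zero YEARS items, D-dimer < 1.0 mg/L
one_plus_criteria_low_ddimer = 1   # >= 1 YEARS item, D-dimer < 0.5 mg/L
no_ctpa_indicated = no_criteria_low_ddimer + one_plus_criteria_low_ddimer  # 36

ctpa_despite_algorithm = 7         # received CTPA although the algorithm did not require it
print(f"{ctpa_despite_algorithm / no_ctpa_indicated:.1%}")  # ~19.4% of those not requiring imaging
print(f"{ctpa_despite_algorithm / analysed:.1%}")           # ~10.4% of the analysed cohort

# Conventional 0.5 mg/L cut-off: 16 additional patients with zero YEARS items and
# D-dimer 0.5-1.0 mg/L would also have been imaged.
conventional_extra = 16
print(f"{(ctpa_despite_algorithm + conventional_extra) / analysed:.1%}")  # ~34.3% potential reduction
```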
Our review indicated that 7 of 68 clinicians documented the use of the YEARS algorithm in their work-up. No clinician documented use of the pregnancy-adapted YEARS algorithm. However, 85.1% of the patients evaluated were treated in alignment with the pregnancy-adapted YEARS algorithm. Deviation from the YEARS criteria was observed with seven patients who received unnecessary CTPA imaging and three patients who did not undergo imaging despite meeting criteria. Two (66.67%) of these three patients met no YEARS criteria and had D-dimer levels >1.0 mg/L, and one patient (33.33%) had one or more YEARS criteria and a D-dimer level of 0.5 mg/L. The results of the pregnancy-adapted YEARS algorithm applied to our cohort are summarized in Figure 2.

Additionally, 14 patients (20.89%) received a lower extremity Doppler, all of which were negative for DVT. Therefore, these patients followed the algorithm outlined in Figure 1. Outcomes of applying the pregnancy-adapted YEARS algorithm to our cohort are summarized in Table 3.

DISCUSSION
In March 2019, the ARTEMIS study was published, demonstrating a 39% decrease in CTPA imaging among pregnant patients when using the pregnancy-adapted YEARS criteria [14]. The ARTEMIS study showed that the pregnancy-adapted YEARS algorithm was able to safely rule out PE in pregnant patients. Following the ARTEMIS study, Langlois et al published a study in May 2019 further applying the pregnancy-adapted YEARS algorithm. This study retrospectively assessed the data from the CT-PE pregnancy study to externally validate the accuracy and safety of the pregnancy-adapted YEARS algorithm. The CT-PE pregnancy study found a 14% decrease in the need for CTPA [18]. When the pregnancy-adapted YEARS algorithm was retrospectively applied to these data, 32 additional patients had PE excluded without the need for CTPA (78 in total, 21%). This resulted in almost twice as many patients being spared radiation exposure [18].

The prospective ARTEMIS study and a subsequent retrospective study demonstrated the safety and efficacy of the pregnancy-adapted YEARS algorithm in pregnant patients in a European population. In our study we aimed to conduct an external validation study in the United States of those international studies. In our retrospective study, we found that 36 patients met no criteria to have a CTPA performed, but seven (19.4%) of these patients did receive a CTPA. None of these seven patients had PE detected via the imaging modality. This cohort represents the patients who could have avoided CTPA and radiation exposure if the pregnancy-adapted YEARS algorithm had been applied. Additionally, our cohort consisted of 28 patients who met zero YEARS criteria and had a D-dimer <1.0 mg/L. If a conventional D-dimer cutoff had been used, rather than the algorithm value, our patients would all have had a cutoff value of 0.5 mg/L [16]. By the intention-to-diagnose approach, this conventional cutoff would have resulted in an additional 16 patients meeting criteria to undergo CTPA imaging, as 16 of the 28 patients with zero YEARS criteria had a D-dimer level between 0.5 and 1.0 mg/L. Combining these with the seven patients who received unnecessary CTPA imaging, our study showed that retrospective application of the pregnancy-adapted YEARS algorithm would have resulted in a 34.3% decrease in CTPA utilization. This is consistent with prior prospective studies showing 21% and 32-65% reductions [14,18].
In other words, without actively following the pregnancy-adapted YEARS algorithm, the clinicians who evaluated the patients in our cohort used their clinical judgment to rule out a PE, despite an elevated D-dimer >0.5 mg/L in 16 patients. Given that a substantial percentage (85.1%) of the clinicians evaluated patients in concordance with the pregnancy-adapted YEARS algorithm, our study found that an additional 10.4% of CTPA utilization could have been avoided with active application of the algorithm, because 7/67 patients underwent CTPA not in concordance with the algorithm. The ARTEMIS study featured 12 patients (6.2%) who underwent CTPA testing despite no confirmed DVT and a D-dimer level below the threshold, which was defined as a protocol violation [14]. Our study showed a similar outcome, with seven patients (10.4%) receiving a CTPA despite a D-dimer below the threshold. Therefore, our study validates the current body of research on the YEARS algorithm and the potential utility of the pregnancy-adapted YEARS algorithm in a rural-suburban setting.

Nevertheless, the results from this study have some notable differences compared to recent prospective studies. One difference was the number of patients in our study meeting any YEARS criteria, especially for hemoptysis or clinical signs of a DVT. Among the 67 patients included in the analysis, only 20 patients met one or more YEARS criteria (30%), and of those 20 patients one had unilateral leg swelling and one had both hemoptysis and PE considered as the number one diagnosis. This demonstrates that the criterion of PE as the number one diagnosis was the largest contributor in our cohort, resulting in 40/67 (59.7%) patients with a negative YEARS algorithm. This criterion was subject to retrospective bias and may account for variation from previously published prospective studies. Notably, those previous prospective studies showed 49% and 75% of their cohorts meeting one or more YEARS criteria [14,18]. Our study additionally featured a smaller sample size than previously published studies, with 67 patients included in the analysis compared to 510 in the ARTEMIS study and 395 in the Langlois study [14,18]. However, despite our relatively small sample size, we were able to achieve a wide and relatively even spread of gestational ages across all trimesters.

To demonstrate the long-term applicability of the pregnancy-adapted YEARS algorithm, a 30-day chart follow-up was performed on the 36 patients who did not meet criteria for a CTPA. Five of these patients failed to follow up. None of the 31 patients who were reviewed demonstrated evidence of a PE or VTE upon follow-up. This further demonstrates consistency with other studies in the use of the pregnancy-adapted YEARS algorithm criteria in an acute diagnosis. All patients in the cohort were followed to completion of their pregnancy, and none had a new diagnosis of VTE or an anticoagulant listed on their medication list.

Our study also showed that three of the 31 patients should have received a CTPA according to the pregnancy-adapted YEARS algorithm but did not receive it. These patients are included in the cohort who received treatment that was nonconcordant with the algorithm. The first of these patients was a 38-year-old woman in her first trimester with a D-dimer of 1.2 mg/L and no YEARS criteria, who was diagnosed with pneumonia. Literature suggests that pneumonia can cause an elevation of the D-dimer level [22,23].
Pneumonia may present similarly to a PE and represents a diagnosis that could require use of the YEARS algorithm and result in unnecessary CTPA utilization.

The second patient was a 28-year-old woman in her third trimester with a D-dimer of 0.76 mg/L and one YEARS criterion. The evaluating physician used a trimester-adjusted D-dimer and decided that CTPA was not necessary. Literature suggests that D-dimer values fluctuate during pregnancy, and its use alone is not sufficient to rule out a PE regardless of trimester [3,7]. The third patient was a 28-year-old woman in her third trimester with a D-dimer of 1.48 mg/L and no YEARS criteria. The evaluating physician decided the patient had unspecified dyspnea of unclear origin and ruled that CTPA was not necessary. There were no PE diagnoses for these patients on 30-day follow-up. If counted against the efficacy of the pregnancy-adapted YEARS algorithm, the additional reduction would decrease from 7/67 (10.4%) to 4/67 (6%), and the total reduction with application of the algorithm would decrease to 20/67 (29.85%), which is consistent with prior prospective studies.

Two patients in this study were diagnosed with a PE. Of note, neither of them had a D-dimer completed; therefore, they were excluded from the study. The first patient was 10 weeks pregnant. She presented with chest pain and shoulder pain that increased with inspiration. She had a complex superficial thrombosis of the lower extremity at the time of her workup and was being treated with low molecular weight heparin. Repeat duplex in the ED showed extension of the clot into the deep venous system. The patient's case was discussed with a maternal fetal medicine physician, who recommended CTPA.

The second patient was 33 weeks pregnant. She presented with back pain and was known to be positive for COVID-19 prior to arrival. She also complained of increasing dyspnea and pleuritic chest pain. Given her symptoms and multiple risk factors for clots, the clinician felt that urgent CTPA was necessary. Although these patients were not included in the analysis, they were incorporated into our results for the prevalence of PE during our study period, which was 2.9%. The prevalence of PE in the ARTEMIS study was 5.4%, and in the Langlois study it was 6.5% [14,18]. Therefore, our cohort had a lower prevalence of PE compared to the prior European studies. This is also consistent with literature demonstrating that the prevalence of ED patients tested for PE in Europe is 23% compared to 8% in North America [19].

In total, 57 of the 67 patients (85.1%) in this study were treated in concordance with the pregnancy-adapted YEARS algorithm, despite only seven physicians documenting the use of YEARS in their workup. This may indicate that there has already been an informal adoption of the pregnancy-adapted YEARS algorithm in clinical practice. The methodology used by clinicians in the workup of this patient population is similar to the proposed algorithm, which may demonstrate that the pregnancy-adapted YEARS algorithm has a higher propensity to be used in clinical practice. However, additional studies are warranted to further elucidate the clinical significance of the pregnancy-adapted YEARS algorithm in different settings and populations. Future research should be aimed at demonstrating the safety of the algorithm applied to populations in the US.
LIMITATIONS
This retrospective study is not without its limitations. First, it introduced selection bias in the cohort that was reviewed. The reviewed charts were not originally designed for research; therefore, pertinent information may have been omitted. The criterion of PE as the number one diagnosis falls victim to retrospective bias. Unless explicitly stated, it was subjective to discern whether the physician believed PE was a primary concern during the medical decision-making process. Another limitation was our small cohort of patients. This may limit the applicability of our results to larger populations. Therefore, the findings and conclusions drawn from this study should be interpreted with caution, recognizing the potential limitations associated with the small sample size. Finally, this study took place in a single health system in northeastern Pennsylvania and may not represent all populations.

CONCLUSION
Previous prospective studies applying the pregnancy-adapted YEARS algorithm in Europe found 21% and 32-65% reductions in CTPA imaging for pregnant patients with suspected pulmonary embolism [8,18]. Our retrospective study reached similar conclusions regarding the pregnancy-adapted YEARS algorithm. Thus, this study serves as an external validation within the United States of the previous European literature. Furthermore, this study demonstrated that most clinicians used clinical rationale concordant with the pregnancy-adapted YEARS algorithm, which indicates a potential for widespread adoption for the evaluation of pulmonary embolism in pregnant patients.

Figure 1. Pregnancy-adapted YEARS algorithm for management of suspected acute pulmonary embolism in pregnant patients. DVT, deep-vein thrombosis; PE, pulmonary embolism; YEARS, diagnostic algorithm for pulmonary embolism; CT, computed tomography; g/mL, grams per milliliter.

Figure 2. Flow chart of pregnancy-adapted YEARS algorithm in a retrospective diagnostic review. PE, pulmonary embolism; CTPA, computed tomography pulmonary angiography; f/u, follow-up.

Table 1. Pregnancy demographics and chief complaints of patients suspected of pulmonary embolism.

Table 2. Breakdown of the number of patients within certain ranges of D-dimer levels, stratified by YEARS criteria met.*

Table 3. Outcomes of pregnancy-adapted YEARS algorithm retrospective utilization. *Patients who were treated non-concordant with the pregnancy-adapted YEARS algorithm. PE, pulmonary embolism; CTPA, computed tomography pulmonary angiography; VTE, venous thromboembolism; V/Q, ventilation/perfusion scan.
v3-fos-license
2016-05-04T20:20:58.661Z
2006-01-01T00:00:00.000
2723407
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://bmcfampract.biomedcentral.com/track/pdf/10.1186/1471-2296-6-51", "pdf_hash": "db9f7cd1bfb1cd58ab5e44ee97a443eb56f0c565", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41722", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "sha1": "414e633bc1f6cc027edcc6ea3f0681d58eed7276", "year": 2015 }
pes2o/s2orc
Difficulties associated with outpatient management of drug abusers by general practitioners: A cross-sectional survey of general practitioners with and without methadone patients in Switzerland

Background: In Switzerland, general practitioners (GPs) manage most of the patients receiving methadone maintenance treatment (MMT).

Methods: Using a cross-sectional postal survey of GPs who treat MMT patients and GPs who do not, we studied the difficulties encountered in the outpatient management of drug-addicted patients. We sent a questionnaire to every GP with MMT patients (556) in the French-speaking part of Switzerland (1,757,000 inhabitants). We sent another, shorter questionnaire to primary care physicians without MMT patients in the Swiss canton of Vaud.

Results: The response rate was 63.3%. The highest methadone dose given by GPs to MMT patients averaged 120.4 mg/day. When asked about help they would like to be given, GPs with MMT patients primarily mentioned the importance of receiving adequate fees for the care they provide. Secondly, they mentioned the importance of better training, better knowledge of psychiatric pathologies, and discussion groups on practical cases. GPs without MMT patients refuse to treat these patients mostly for emotional and relational reasons.

Conclusion: GPs encounter financial, relational and emotional difficulties with MMT patients. They desire better fees for services and better training.

Background
Methadone maintenance treatment (MMT) is extensively used for opiate addiction. Providing care in an office-based practice is feasible [1,2] and produces outcomes comparable to those from specialist treatment [1,3-7]. Furthermore, it reduces the stigma associated with the diagnosis and treatment of substance abuse and increases the amount of attention paid to medical and psychiatric conditions [3,4]. Easy geographical access to treatment encourages employment rehabilitation and retention in treatment [4].
However, GPs encounter specific difficulties with this population: burnout, lack of training, a negative attitude and a lack of motivation have been widely reported [3,8,9]. These difficulties prevent some GPs from accepting MMT patients. Furthermore, each country manages MMT in a different way: the UK encourages every general practitioner to pre-scribe methadone through national policies and guidelines; France usually reserves MMT for specialized centers and promotes the use of buprenorphine [10]; and the United States only recently began to allow GPs to prescribe MMT. In Switzerland, most patients on MMT are treated by GPs, currently using the oral liquid form of methadone. Buprenorphine use is very rare and codeine is not encouraged. Only some specialized centers treat opiate abusers with injectable heroin. GPs have to register for every opiate substitution with methadone to afford the double prescription. The Swiss government encourages substitution treatment for drug-addicted individuals in the context of a harmreduction policy, but does not push GPs to accept these patients and has never distributed national guidelines broadly as has the UK. Generally, however, Swiss GPs are used to providing pharmacotherapies and other treatments to drug users in our country. There is no shared care as in England, but Swiss GPs use a pragmatic approach, including meeting with social workers and pharmacists in charge of MMT patients. Groups of GPs involved with the drug-addicted have been created and receive support from the Federal Office of Public Health for continuous formation and clinical discussion. Specialized centers with psychiatrists, social workers, psychologists and medical doctors offer multidisciplinary management for unstable patients. However, in Switzerland as elsewhere, there are not enough specialized centers for all drug addicts requiring treatment, and access is limited by geographical barriers and the restricted number of treatment places that can be offered [11]. Furthermore, some geographical regions do not have specialized centers. Office-based treatment provides clear advantages for drug-abusing patients. However, primary care practitioners encounter specific difficulties with this patient population. Burnout, lack of training, a negative attitude and a lack of motivation have been reported widely among the GP population [3,8,9]. These difficulties discourage and prevent primary care physicians from accepting drug abusers for substitution treatment. The aims of this study were to (1) to describe the specific difficulties with the MMT population encountered by primary care physicians and to identify why primary care physicians are reluctant to manage drug-abusing patients in Switzerland; and (2) to identify primary care physicians' needs in terms of future management of drug-abusing patients in the French-speaking part of Switzerland and to suggest solutions to help them. Methods We mailed a multiple-choice questionnaire designed to evaluate various aspects of the difficulties encountered in treating MMT patients: pharmacological issues (highest methadone dose), legal requirements, financial issues (how GPs get paid), emotional and psychiatric aspects (including psychiatric medication and referral to a psychiatrist), relationships, multidisciplinary interactions (e.g. with social workers or with pharmacists), motivation of primary care physicians, and specific management in the office setting. 
We collected the material to develop this questionnaire during semi-formal interviews with the staff at Saint-Martin (a specialized center for managing drug abusers). MedRoTox practitioners (a group of general practitioners concerned with the problem of dependency and supported by the Swiss Federal Office of Public Health) reviewed the questionnaire. During the calendar year 2000, we mailed it for anonymous completion to every primary care practitioner with MMT patients in the French-speaking part of Switzerland (556 physicians). This figure includes every GP prescribing methadone in this part of the country, which has a population of 1,757,000. We sent the questionnaire again three months later to increase the response rate and received answers over the following three months. We sent another questionnaire to GPs without MMT patients to evaluate more specifically the factors that kept them from accepting these patients. We used the mailing list kept by the outpatient clinic at Lausanne University Hospital. These 365 GPs represent most of the primary care practitioners in the Swiss canton of Vaud. This questionnaire, also for anonymous completion, was also sent twice at an interval of three months. Both questionnaires are available from the corresponding author. We used descriptive statistics and the chi-square test for comparison. We performed all statistical analyses by SPSS software, version 11. Results Of the 556 targeted GPs with MMT patients (PT: practitioners with MMT patients), 63.3% (352) responded. We received replies from 231 (63%) of the targeted 365 primary care physicians without MMT patients (PWT: practitioners without MMT patients). Table 1 shows the profiles of both groups of GPs. Both populations (PT and PWT) were similar in terms of gender frequencies, practice location and the percentage who work in a group practice. The only statistically significant difference was the mean number of years in medical practice (PT: 14.8 years, PWT: 17 years). Table 2: Of the total PTs respondents, 73% would not accept more patients. The mean number (± SD) of MMT patients that a PT would like to have (5.8 ± 6.9, median 4) was slightly less than the mean number that they actually treat (mean: 6.2 ± 9.04, median 4). Responding PTs reported an average highest daily methadone dose of 120.4 mg/day (median = 100 mg/day, mode = 100 mg). The percentage of PWTs who had received requests for methadone treatment was 52%. Of the responding PWTs, 42.9% had been involved in methadone treatment in the past but were no longer treating such patients. Of the PWTs, 88.7% did not treat MMT patients because they refused to accept them into their practice. Table 3 shows the improvements reported by both PTs and PWTs as necessary for improving MMT patient management. PTs mostly emphasized better reimbursement for related items of service. They also frequently mentioned better training (post-graduate or specialized psychiatric training) and more interaction with other professionals, including groups for discussing clinical cases. PWTs gave priority to having more centers and more specialized professionals for treating drug-addicted patients. Of the PT group, 56% of physicians reported difficulties with medical care reimbursement (table 4). Also, a total of 76.1% had learned about MMT through their own practices rather than receiving formal training. Most PTs are interested in investing time in further training. Table 5 shows reasons why PWTs refuse MMT patients. 
PWTs rated non-compliant patients as the biggest obstacle to management (59.7%) and preferred patients to be managed by a specialized center (57.1%). The "time-consuming" nature of treatment for drug-addicted patients was another major reason (54.5%) cited for not accepting them. Discussion The disinclination of the PWTs to treat MMT patients (88.7%) raises the question of how to change this attitude, especially in light of the growing need for MMT (9,700 patients treated with methadone in 1991 and 15,382 in 1997 in Switzerland) [12] and the lack of specialized centers and government policies encouraging easy access to MMT. More medical training, specific training during residency and the development of faculty role models would probably contribute to improving attitudes [7]. A strikingly high percentage of GPs refuse to treat MMT patients for reasons linked to relationships or management of emotions (noncompliant patients, fears, feeling of powerlessness, or burnout in the past, as illustrated at Table 5). Again, better role models, a more positive attitude during basic medical training and greater emotional and professional support for practitioners involved with MMT patients could perhaps overcome these barriers. Of the PWTs, 11.3 % would still accept MMT patients. This information suggests an area warranting further research, and we need better tools to identify, reach, teach and encourage these physicians. The highest average daily dose of methadone (120.4 mg/ day, table 2) is not surprising in view of the recommendations in the literature: although daily doses of methadone may differ from one patient to another, some authors recommend daily doses between 60 and 100 mg/day [15][16][17][18][19]. A UK postal survey addressed to GPs in 2001 identified a mean methadone dose of 36.9 mg [14]; although this information (the mean) differs from the information obtained in our study (highest methadone dose prescribed), our result is still higher than expected: generally, GPs are known to prescribe low doses of methadone, contrary to international recommendations [14,16]. However, Swiss GPs with MMT patients seem to be more aware of these recommendations. We hypothesize that PTs may have better formation and could be more concerned about methadone issues. When asked how the management of MMT patients could be improved (table 3), PTs first mentioned better reimbursement for services provided. In Switzerland, patients pay part of the costs of health care, with the mandatory health insurance system picking up the rest of the bill. In some instances, health insurance companies reimburse patients so that they can pay their physicians directly. This practice was intended to create a stronger sense of responsibility among patients for the cost of their medical treatment. However, it is often difficult for a drug-addicted patient to reimburse physicians with these funds; the drug-addicted usually have difficulties with money management and may spend the money instead on illicit drugs. If the patient does not pay, money is rapidly deducted from the social help provided to pay the monthly health insurance. But the physician may not receive the portion for which the patient is responsible. This payment system is specific to Switzerland but shows that adequate reimbursement is important. Weinrich and Stuart have demonstrated that professional and financial help are crucial for primary care practitioners in Scotland [2]. 
In the UK, financial rewards for general practitioners helped them to accept and continue working with MMT patients [2,14]. In contrast, PWTs suggest increasing the number of specialized centers and developing more accessible specialized professional help as the first steps towards improving MMT patient management (table 5). These suggestions are fully in keeping with their attitudes about not accepting MMT patients. As the results showed, significantly more PTs than PWTs felt that better postgraduate training could improve the management of patients in MMT. PTs mentioned lack of training as the second area for improvement in the management of MMT patients, suggesting better postgraduate formation, discussion groups focusing on clinical cases, better knowledge of psychiatric pathologies, and better training during residency. The need of GPs for adequate training in addiction is well known [9]. Miller et al. [13] emphasized the lack of specific training at medical schools and the absence of a positive attitude and role models among faculty and physicians. In the French-speaking part of Switzerland, two universities have medical schools. One (the University of Lausanne) offers a 12-hour teaching module including alcohol-and drug-related dependence and one day of practice in the psychiatric service; the other (the University of Geneva) has 80 hours of teaching on alcohol-and drug-related problems. However, this formal training in addiction began only a few years ago. Each region also has its own training opportunities, depending on the local network. Although local discussion groups for clinical cases already exist, the high percentage of PTs who expressed a need for more training points to a current overall lack of training. Our study found that 76.1% of PTs have learned through their own daily practice how to manage methadone treatment (table 4). This is another powerful illustration of the lack of training that doctors receive and makes an urgent case for the development of better training opportunities for primary care practitioners who provide methadone treatment in Switzerland. Two concerns mentioned with similar frequency by PTs were the need for more political support in the treatment of drug-addicted patients (provision of more centers, more specialized professionals) and the need for more accessible specialists (table 3). Interestingly, PTs are more interested in improving their own practices (through reimbursement of fees and training) than in developing other infrastructure for treating drug abusers. This finding illustrates the concentration of MMT patient treatment in the outpatient setting in Switzerland. The median number of patients managed by the PTs represented in the survey is fairly low (four per practitioner, table 2) but is comparable with a recent postal survey in England (3.58 patients with opiate substitution per prescribing GP) [14]. However, when asked how many MMT patients they would like to treat, PTs responded with an even lower number. This finding underlines the limited capacity of a single physician to accept and treat MMT patients and the burden represented by these patients. In Switzerland, the primary care practitioners who accept patients for methadone treatment probably represent those doctors who are more trained and more interested in MMT patients. Our study has some limitations. First, the PWT population was sampled from the Canton of Vaud. We intentionally chose this Canton because it includes both an urban and a rural population. 
Both survey populations (PT and PWT) were similar in terms of sex, location in a village or a town (>10,000 inhabitants) and the type of practice (single or shared). The only statistically significant difference was the mean number of years of practice in the two populations. Age and training in treating drug misuse have been shown to affect attitude [9]; thus, younger practitioners may be more likely to consider MMT patients as medical patients than as stigmatized individuals. Furthermore, older practitioners were not as accustomed to delivering methadone. The response rate to the survey (63.3% of PT and 63% of PWT) is another limitation. Although it is a high rate for a nine-page questionnaire, the survey still represents the opinions of only some practitioners. Compared with the literature, it was better than expected for a busy general practice [20]. Even if the return rate was similar for PTs and PWTs, there could be a bias of greater interest in the subject or of greater personal involvement. A final limitation is the fact that the questionnaire was not formally validated by inter-rater techniques.

Conclusion
Each country has specific needs and characteristics for drug management, depending on government policies and the existing health care network. Switzerland has a policy of decentralization and harm reduction based on a low-threshold approach and broad access to MMT through GPs, a policy embroiled in major political and emotional controversies. "Shared care" only exists in specialized centers in the form of multidisciplinary work with drug nurses, psychiatrists, social workers and GPs, but is reserved for more disruptive and unstable patients. In Switzerland, MMT prescribed outside specialized centers involves only a highly selected group of GPs trained in the addiction field. National policies, however, encourage GPs to work in multidisciplinary teams and to meet regularly with social workers and other healthcare providers. If the intention is specifically to achieve a low-threshold approach through generalizing MMT prescription, we need to listen to suggestions made by the principal players, i.e. the general practitioners. PTs want reimbursement for their services and better training. The growing needs of drug-addicted patients, the spread of HIV and the greater emphasis on harm-reduction policies are surely powerful reasons for answering this plea and providing support to practitioners who accept MMT patients.
v3-fos-license
2022-04-01T05:16:03.203Z
2022-03-30T00:00:00.000
247839284
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0266160&type=printable", "pdf_hash": "cf1db1d074c745f677aa9c48a0d75c37a2df82d8", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41725", "s2fieldsofstudy": [ "Medicine" ], "sha1": "cf1db1d074c745f677aa9c48a0d75c37a2df82d8", "year": 2022 }
pes2o/s2orc
Patterns of home care assessment and service provision before and during the COVID-19 pandemic in Ontario, Canada

Objective The objective was to compare home care episode, standardised assessment, and service patterns in Ontario's publicly funded home care system during the first wave of the COVID-19 pandemic (i.e., March to September 2020), using the previous year as reference.

Study design and setting We plotted monthly time series data from March 2019 to September 2020 for home care recipients in Ontario, Canada. Home care episodes were linked to interRAI Home Care assessments, interRAI Contact Assessments, and home care services. Health status measures from the patient's most recent interRAI assessment were used to stratify the receipt of personal support, nursing, and occupational or physical therapy services. Significant level and slope changes were detected using Poisson, beta, and linear regression models.

Results The March to September 2020 period was associated with significantly fewer home care admissions, discharges, and standardised assessments. Among those assessed with the interRAI Home Care assessment, significantly fewer patients received any personal support services. Among those assessed with either interRAI assessment and identified as having rehabilitation needs, significantly fewer patients received any therapy services. Patients receiving services received significantly fewer hours of personal support and fewer therapy visits per month. By September 2020, the rate of admissions and services had mostly returned to pre-pandemic levels, but completion of standardised assessments lagged behind.

Conclusion The first wave of the COVID-19 pandemic was associated with substantial changes in Ontario's publicly funded home care system. Although it may have been necessary to prioritise service delivery during a crisis situation, standardised assessments are needed to support individualised patient care and system-level monitoring. Given the potential disruptions to home care services, future studies should examine the impact of the pandemic on the health and well-being of home care recipients and their caregiving networks.

Introduction
The World Health Organization declared COVID-19 a pandemic in March 2020 [1]. By September 30, 2020, Canada had recorded 158,758 COVID-19 cases and 9,327 COVID-19 deaths [2]. The effects of COVID-19 were disproportionately borne by residents and staff in long-term care settings, particularly in Quebec and Ontario. Through the first wave, 12% of COVID-19 cases and 75% of COVID-19 deaths in Canada occurred in long-term care homes [3].
Essential care partners and health system leaders identified a number of contributing factors in the long-term care system [4], many of which also applied to the home care system such as the lack of funding stability [5], increased patient acuity [5,6], and a personal support workforce that is marginalised and under-valued [7]. Yet little is known about how the COVID-19 pandemic affected Canadians receiving publicly funded home care services. Home care services refer to an array of home-based personal and professional supports, including but not limited to, personal support and homemaking services, nursing services, occupational therapy, physical therapy, and social work. Receiving care at home promotes independence and physical, mental, and social well-being while providing a less expensive alternative to institutional care and creating health system capacity [8,9]. Individuals often rely on support from informal or unpaid caregivers such as family members. Many also receive care from formal providers who may be paid by the Ontario Health Insurance Plan (i.e., public insurance), private insurance, or out-of-pocket. In Canada's most populous province, an estimated 5.2% of Ontarians receive publicly insured home care services [8] that are coordinated by the 14 Home and Community Care Support Services organisations (HCCSS) and delivered by contracted service provider agencies. Once an individual is connected to their local HCCSS (either by referral or calling their organisation directly), HCCSS care coordinators assess the individual's needs and develop the care plan. The interRAI Home Care assessment and the interRAI Contact Assessment are standardised assessments used with public home care patients in most Canadian provinces including Ontario. At the person level, standardised clinical assessments are used to identify the type and degree of needs, tailor care plans, and track health status. Organisations regularly submit interRAI assessment data to the Canadian Institute for Health Information (CIHI) for system-level monitoring of health outcomes and quality indicators. interRAI is an international not-for-profit network of researchers and health and social service professionals who develop and support standardised comprehensive assessment tools and applications for a variety of health care settings [10][11][12]. The interRAI Home Care assessment (or its earlier version RAI-Home Care) and interRAI Contact Assessment have been used in Ontario's publicly funded home care system since 2002 and 2010, respectively. Previous research has established the validity and reliability of these assessments [13][14][15][16][17][18][19]. The interRAI Contact Assessment (about 50 items) is used to screen new home care patients for key health and social needs and serves as a minimum data set for those who do not require further assessment. The interRAI Home Care assessment (about 250 items) is much more comprehensive and measures cognition, communication, mood and behaviour, psychosocial well-being, physical functioning, continence, disease diagnoses, health conditions, oral and nutritional status, skin condition, medication, treatments and procedures, social supports, environmental assessment, and discharge potential. The interRAI Home Care assessment also produces clinical scales and care planning protocols. 
In Ontario, the vast majority of new home care patients receive the interRAI Contact Assessment within 2 to 6 weeks, followed by an interRAI Home Care assessment if they are expected to require home care services for longer than two months (i.e., long-stay patients) [20]. Reassessments are normally done by regulated health professionals (e.g., nurses) every 6 to 12 months, or sooner if prompted by a significant change in the patient's health. A CIHI report found that, between March and June 2020, home care patients in four Canadian provinces were less likely to receive a standardised clinical assessment [21]. The CIHI report was unable to examine whether service patterns changed, although other publications suggested a reduction in supply of and demand for formal home care services during this period. Service provider agencies faced staffing shortages, individual providers experienced safety concerns and other job challenges, and patients and families may have placed their services on hold to limit the risk of viral transmission [22-25]. Changes in routine assessment and service provision could have led to individual- and system-level consequences. Missed assessments may have increased the risk of overlooking important health changes, while missed services may have led to gaps in care and increased the burden on patients and families. We are aware that some jurisdictions completed non-standardised paper-based instruments early in the pandemic as a brief screening approach. This was concerning because these data were neither available to CIHI for its health system performance reports nor comparable with standardised assessments completed in other health care settings (e.g., long-term care). Also, these instruments did not provide decision support tools (e.g., scale scores, risk algorithms) that could inform timely decision-making. For this reason, we focused on standardised assessments, which is distinct from the total number of assessments or assessed patients. Our study sought to compare the patterns in home care episode, standardised assessment, and service volumes in Ontario's publicly funded home care system before and during the COVID-19 pandemic. The goal was to provide an understanding of the province's home care operations during the pandemic and to help inform future strategies to ensure continuity of assessment approaches in the face of system-level crises.

Study design and setting
We plotted monthly time series data for publicly funded home care recipients in Ontario, Canada. Data from March 2020 to September 2020 represented the period of interest (i.e., during the COVID-19 pandemic) and data from March 2019 to February 2020 were used for comparison (i.e., before the COVID-19 pandemic).

Data sources
Ontario Health maintains the Home Care Database (HCD), which stores assessment and administrative data on all publicly funded home care services coordinated by HCCSS and delivered by, and paid to, service provider agencies. An existing data sharing agreement permitted the transfer of data from Ontario Health to the interRAI Canada research group at the University of Waterloo. All data were anonymised by Ontario Health, although a linking field (i.e., patient identifier) was generated to allow merging of data tables. Use of these data and the processes in place to protect patient privacy and confidentiality were approved by the University of Waterloo's Office of Research Ethics (ORE# 18228). At the time of writing, data up to September 2020 were available.

Assessment data.
This study used the following validated scales and algorithms from the interRAI Home Care assessment: the Activities of Daily Living Hierarchy Scale (ADLH) ranges from 0 to 6, with higher levels indicating greater difficulty in performing activities of daily living [26]; the Cognitive Performance Scale 2 (CPS2) ranges from 0 to 8, with higher levels indicating greater cognitive impairment [27]; the Depression Rating Scale (DRS) ranges from 0 to 14, with higher levels indicating more numerous and/or more frequent depressive symptoms [28]; the Communication Scale ranges from 0 to 8, with higher levels indicating greater difficulty in making oneself understood and in understanding others; the Changes in Health, End-stage disease, Signs, and Symptoms Scale (CHESS) ranges from 0 to 5, with higher levels indicating greater health instability [29,30]; and the Personal Support (PS) Algorithm ranges from 1 to 6, with higher groups suggesting greater need for personal support services [31]. Additionally, items assessing recent changes in decision-making and functional status were used to code for cognitive and functional decline. This study also used the following from the interRAI Contact Assessment: CHESS-CA ranges from 0 to 5 and has the same interpretation as CHESS, with higher levels indicating greater health instability [32]; and the Rehabilitation Algorithm ranges from 1 to 5, with higher levels suggesting greater need for therapy services [33].

Administrative data

Patient-level demographic information, admission and discharge dates, and home care services were retrieved from the HCD. Personal support services and shift nursing were reported in hours. Non-shift nursing, occupational therapy, and physical therapy were reported in the number of visits. In this paper, nursing hours and visits were summed to represent total nursing services.

Sample

The full sample comprised all adults (age ≥ 18 years) who received publicly funded home care services in Ontario between March 2019 and September 2020. The full sample was used to report on monthly admissions and discharges, interRAI Home Care assessments, interRAI Contact Assessments, and home care services. We created a sub-sample by linking each patient's services with their most recent assessment, which was used to stratify service patterns by indicators of potential need. For interRAI Home Care assessments, which are typically done within 12 months, we applied a 13-month lookback period up to February 2020 (i.e., before the COVID-19 pandemic). We extended the lookback period to 16 months between March and September 2020 to allow for overdue assessments during the COVID-19 pandemic. For interRAI Contact Assessments, which are typically done within 6 weeks, we applied a 2-month lookback period. Services that could not be linked to any standardised assessment were excluded. This sub-sample was used to report on the receipt of personal support services by PS Algorithm group, nursing services by CHESS and CHESS-CA, and occupational and physical therapy services by cognitive or functional decline and by Rehabilitation Algorithm group. Details about the sample selection are summarised in a flow diagram provided in S1 Fig. To aid readability, we used "screening assessment" and "comprehensive assessment" in place of the official instrument names in the results and discussion sections.

Analysis

For each calendar month, total counts were presented using line and bar charts. We performed interrupted time series analyses using Poisson, beta, and linear regression models to detect significant changes in trends [34]. The segmented regression model took the form

g(E[Yt]) = β0 + β1·t + β2·Pt + β3·(t − t0)·Pt,

where g is the link function of the respective model (log for Poisson, logit for beta, identity for linear regression), t indexes calendar month, Pt is an indicator for the pandemic period beginning in month t0, β0 is the intercept, β1 is the slope during the pre-pandemic period (i.e., March 2019 to February 2020), and β2 and β3 are the level and slope changes during the pandemic period (i.e., March 2020 to September 2020), respectively. For patients with a linkable interRAI assessment, we examined the receipt of home care services in two ways: the proportion of patients receiving any home visit of the service type, and the adjusted monthly number of visits (or hours) among those receiving any home visit of the service type. We adjusted the amount of services by the number of valid days, so that time before a patient was admitted or after a patient was discharged was not counted in the denominator. To account for lagged effects, the service models were run with both March and April 2020 as the start of the pandemic period. Chi-square tests were used to detect significant differences in patient characteristics before and during the pandemic period. We selected a significance threshold of p < 0.05. All analyses were done using SAS 9.4 (SAS Institute Inc., Cary, NC).
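To make the segmented-regression specification above concrete, here is a minimal sketch of the Poisson variant fitted to a single monthly count series. The counts below are toy values, not the study data, and the study's own models were fitted in SAS 9.4; the Python/statsmodels version is shown only so the level-change (β2) and slope-change (β3) terms are explicit.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Toy monthly counts, March 2019 to September 2020 (19 months); not the study's data.
months = pd.period_range("2019-03", "2020-09", freq="M")
y = np.array([31105] * 12 + [27932, 19347, 22101, 24498, 27010, 29003, 30190])

t = np.arange(len(months))                                         # months since March 2019
pandemic = (months >= pd.Period("2020-03", freq="M")).astype(int)  # level-change indicator P_t
t0 = int(np.argmax(pandemic))                                      # first pandemic month
t_since = np.where(pandemic == 1, t - t0, 0)                       # slope-change term (t - t0) * P_t

X = sm.add_constant(np.column_stack([t, pandemic, t_since]))       # columns: 1, t, P_t, (t - t0)*P_t
fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
print(fit.summary())  # x1 ~ beta1 (pre-pandemic slope), x2 ~ beta2 (level change), x3 ~ beta3 (slope change)
```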
Results

Before the COVID-19 pandemic, Ontario's publicly funded home care system admitted 31,105 patients and discharged 30,625 patients in an average month (Fig 1). At the onset of the pandemic, admissions fell by 10.2% and 37.8% in March and April, respectively (β2: p<0.0001). In contrast, discharges increased by 4.0% in March, then fell by 16.6% and 32.0% in April and May, respectively (β2: p<0.0001). During the ensuing months, both volumes increased until the number of admissions and discharges had reached 97% and 95% of their pre-pandemic averages, respectively (β3: p<0.0001). Before the COVID-19 pandemic, Ontario's publicly funded home care system completed 22,741 comprehensive assessments and 23,557 screening assessments in an average month (Fig 1). There was a slight but significant downward trend in assessment volumes from March 2019 to February 2020 (β1: p<0.0001). At the onset of the pandemic, comprehensive assessments declined by 25.6% and 57.7% in March and April, respectively (β2: p<0.0001). Screening assessments followed a less steep decline in the same period, falling by 15.7% and 38.5% (β2: p<0.0001). Although both assessment types demonstrated a significant positive trend during the pandemic period (β3: p<0.0001), volumes remained lower than usual. By September 2020, the number of comprehensive and screening assessments had reached 59% and 88% of their pre-pandemic averages, respectively. As expected, the pattern of screening assessments (which are typically used to assess new home care patients) appeared to mirror the pattern of admissions. Table 1 compares the sociodemographic and clinical characteristics of home care patients who received a comprehensive assessment between March and September in 2019 and 2020 (as depicted by the black line in Fig 1). While the proportions of assessed patients older than 65 years and identifying as female did not vary significantly between cohorts, there were small but significant differences in marital status, living arrangement, and residence type. Post-hoc comparisons showed that significantly more long-stay patients assessed during the COVID-19 period had never been married, lived alone or lived with relatives other than a spouse/partner, and lived in a private home/apartment or rented room.
Long-stay patients assessed during the COVID-19 period also had significantly more complex health needs across the major clinical domains. The prevalence of health instability increased from 27.0% to 31.5%, communication impairment increased from 16.1% to 17.8%, and cognitive impairment increased from 43.9% to 47.4%. Before the COVID-19 pandemic, Ontario's publicly funded home care system coordinated the delivery of approximately 2.9 million hours of personal support services, 600,000 units of nursing services (combined hours and visits), and nearly 100,000 therapy visits in a typical month (Fig 2). Personal support services fell by 1.4% and 18.9% in March and April, respectively. Nursing services increased by 0.2% in March and fell by 8.5% in April. Occupational and physical therapy services declined by 11.9% and 40.2% in March and April, respectively. All service types reached their lowest volumes in April and increased steadily during the ensuing months. By September 2020, the volumes of personal support, nursing, and therapy services had reached 94%, 105%, and 105% of the pre-pandemic averages, respectively. Fig 3 examines the receipt of personal support services among long-stay patients stratified by increasing need for personal support services, defined by higher PS Algorithm groups. Among those assessed with the comprehensive assessment, patients in all PS Groups were significantly less likely to receive any personal support services in either March or April 2020 (all β2: p<0.05). The relative declines were similar across groups, averaging about -3% to -5% from their pre-pandemic averages. For those receiving any personal support services, the onset of the pandemic was also associated with lower median allocations (all β2: p<0.05). In March 2020, the change in the adjusted monthly amount of personal support ranged from -0.8 hours in PS Group 2 (-11.5% change) to -2.9 hours in PS Group 6 (-5.6% change) compared to the previous year. Across most PS groups, the interaction terms representing the slopes during the pandemic period were positive and significant, indicating that both the likelihood of receiving personal support services and the amount of personal support services received increased over time. By September 2020, median allocations exceeded pre-pandemic averages by 4 to 6%. Fig 4 examines the receipt of nursing services among patients identified to have high levels of health instability (i.e., a CHESS or CHESS-CA score of 4 or 5). Among those assessed with the comprehensive assessment, the proportion of patients with high health instability who received any nursing services significantly increased from 33.5% (pre-pandemic average) to 37.9% (April 2020) (β2: p = 0.001). Among those assessed with the screening assessment, the proportion of patients with high health instability who received any nursing services significantly increased from 77.6% (pre-pandemic average) to 83.0% (April 2020) (β2: p = 0.03). The interaction terms were negative and significant, indicating these proportions fell in the subsequent months (β2: p<0.05). No significant level or slope changes were observed in the adjusted monthly amount of nursing services. Fig 5 examines the receipt of occupational or physical therapy services among patients with potential rehabilitation needs.
Among those assessed with the comprehensive assessment, the proportion of patients who had experienced a recent decline in cognitive or functional status and received any therapy services fell from 21.1% (pre-pandemic average) to 15.4% (April 2020) (β2: p<0.0001). The median amount of therapy services also fell from 1.8 hours per month (pre-pandemic average) to 1.0 hours per month (April 2020) (β2: p<0.0001). A similar pattern of decline was observed for patients assessed with the screening assessment and identified to have high rehabilitation needs, where the proportion receiving any therapy services fell from 64.9% (pre-pandemic average) to 60.4% (April 2020) (β2: p = 0.01). The median amount of therapy services also fell from 2.6 hours per month (pre-pandemic average) to 2.3 hours per month (April 2020) (β2: p<0.0001). All interaction terms representing the slopes during the pandemic period were positive and significant. By September 2020, the proportion of long-stay and short-stay patients receiving any therapy services had reached 97% and 102% of the pre-pandemic averages, respectively.

Discussion

Ontario was substantially affected by the first wave of the COVID-19 pandemic. By September 2020, Ontario had recorded 51,710 COVID-19 cases and 2,848 COVID-19 deaths, accounting for 32.6% and 30.6% of Canada's COVID-19 cases and deaths, respectively [2]. Numerous reports highlighted the wide-ranging impacts of the pandemic across the health system, including substantial outbreaks and extended lockdowns in long-term care homes [35,36], fewer preventive and chronic care visits in primary care settings [37], as well as fewer emergency department visits and hospital presentations and cancellations of planned surgeries [38][39][40]. Our findings were consistent with a recent CIHI report [21] showing that Ontario's publicly funded home care system completed significantly fewer standardised assessments during the March to September 2020 period. Further, our study demonstrated that this period was significantly associated with fewer home care admissions and discharges, and with reductions in both the proportion of patients receiving personal support and therapy and the amount of these services received per patient. By September 2020, the rate of admissions and services had mostly returned to pre-pandemic levels; however, the recovery of standardised assessments lagged behind. Comprehensive assessments were more affected than screening assessments, which can be at least partially explained by differences in target populations and instrument design. The screening assessment is typically used at home care intake; therefore, it was expected that the volume of screening assessments would recover as quickly as home care admissions. As well, the screening assessment was validated for both in-person and phone use, which made it easier for assessors to pivot within existing practice to phone assessments. Accordingly, CIHI reported a 53% increase in phone-based screening assessments between April and June 2020 compared to the same period in 2019 [21]. In contrast, the comprehensive assessment was designed to be completed in person in the patient's home, so that assessors can integrate visual/sensory information about the patient and their home environment in addition to the needs reported by the patient and family.
Although interRAI released a guideline for completing the comprehensive assessment via video conferencing in March 2020 [41], patients may have had difficulties setting up or using the technology (e.g., positioning the camera), and providers would have needed time to update assessment policies and build confidence in the quality of the data. Among those receiving a standardised assessment, the pandemic appeared to change patterns of personal support and therapy services more so than nursing services. Therapy services declined by the largest percentage, raising the question of whether rehabilitation services were more likely to be perceived as care that could be delayed. Jones and colleagues [42] observed the same phenomenon among home care recipients with dementia and hypothesised that nursing services may have been considered more essential than other home care services. Given that rehabilitation promotes functional reserve and can reduce the risk of poor outcomes such as falls, frailty, and hospitalisation, we argue that therapy services serve a critical role, particularly during a time of limited health system capacity [43]. Rapid adoption of virtual rehabilitation was likely a major contributor to recovery, for which there is emerging evidence of high quality of care and patient satisfaction [44]. We also found that fewer patients received personal support services and that patients received fewer hours of service, which was consistent with the experiences of home care patients and their caregivers reported in other studies [23,24]. In this study, we further demonstrated that these patterns held true regardless of the degree of help needed with personal care. We speculate that this reflects opportunities and challenges created by the pandemic that were similar across levels of need. For some families, remote work or school arrangements offered more flexibility and time at home to provide more caregiving, thus reducing the need for paid services [22,45]. Other families may have opted to cancel or pause services to mitigate the risk of virus transmission, especially in cases where multiple service providers were involved or service providers also worked in long-term care homes [23,24]. At the same time, caregiving networks may not have been able to completely replace the personal care delivered by formal providers. Many caregivers took on more caregiving responsibilities amid other commitments such as work, school, or childcare [45]. Additionally, families had to balance different safety risks when deciding whether to continue home care services. Although many caregivers reported high levels of anxiety related to exposure risks, they also feared they would not be able to manage without additional help, which would place both the care recipient and the caregiver at risk of worsening health [24,45,46]. Families may have negotiated these risks by reducing the number of visits or limiting the number of service providers in the home. Although this study focused on paid home care services, it is important to understand the impacts of service changes on unpaid caregiving. Even before the appearance of COVID-19, one in three unpaid caregivers of home care patients experienced caregiver distress [47]. During the pandemic, unpaid caregivers had fewer options to ask for or hire external help despite being in even greater need of respite [45].
Distressed caregivers have typically been those caring for loved ones with substantial personal care needs [47], but these analyses raise the question of whether home care service disruptions (particularly of personal support services) may have affected a broader group of caregivers. The sustainability of community-based care relies on protecting the health and well-being of caregivers. This study has important implications for home care practice, research, and quality monitoring. Although Ontario's publicly funded home care system appeared to be functioning closer to normal, comprehensive assessment volumes remained 41% lower in September 2020 compared to the previous year. From a practice standpoint, this meant that a substantial proportion of the home care population was not being (re-)assessed with a standardised assessment. During the first wave, some jurisdictions allowed assessors to complete the interRAI Home Care assessment by phone, while others continued to mandate in-person visits. Other jurisdictions switched to the interRAI Contact Assessment for patients for whom they would normally have completed a comprehensive assessment. As the province entered the second wave, some jurisdictions adopted the interRAI Check-Up Self-Report assessment, which is intended for patients with lighter care needs [48]. These alternatives, either postponing standardised assessments or adopting non-standardised assessments, were unsustainable because they could not provide a full picture of which patients may have deteriorated (with or without a positive COVID-19 status) or of the magnitude of the problem. Importantly, home care clinicians and administrators should re-establish standardised assessments as a key function of home care operations. It is important not to lose sight of patient needs in the midst of any major crisis, including a global pandemic, and it is also essential to identify changes in individual health that have ongoing consequences (e.g., new mental health concerns, functional decline, exacerbations of chronic disease). Research and quality monitoring in home care are also enabled by standardised assessments. For instance, some of this paper's co-authors used RAI-MDS 2.0 data to compare the prevalence of resident depression, delirium, and behaviour problems and to measure the effect of COVID-19 lockdowns in long-term care homes [49]. Likewise, standardised data in the home care sector can be used to measure the impact of the pandemic on patient health and well-being. In this study, home care patients who were assessed during the pandemic had worse health instability, communication impairment, and cognitive impairment compared to the previous year, but future studies should discern whether this was due to HCCSS prioritising standardised assessments for the most complex patients or whether this represented a real change in the health status of the home care population. In this paper, we highlighted the importance of caregiver well-being, and we strongly recommend that the sector utilise the caregiving questions already embedded within interRAI assessments to screen for individual caregiver needs as well as to monitor levels of caregiver distress across the system. To pursue these studies, researchers will need to account for missing standardised assessments during the height of the pandemic. Initial and routine assessments completed during this time captured a slightly more complex population and were not representative of the whole home care population.
Researchers will also need to consider adapting observation periods to account for overdue assessments. It may also be useful to analyse home care outcomes at the regional level, since individual HCCSS may have adapted their assessment policies in different ways. By choosing methodological designs thoughtfully and through careful interpretation, we believe that information gained from home care assessments (despite the reduced volumes) can be used to support quality monitoring and policymaking. Since 2002, the use of interRAI assessments across Ontario home care has enabled census-level research and monitoring of the home care sector. However, as this paper demonstrates, the quality and completeness of the data rely on administrators and assessors maintaining the assessment standard, even during times of crisis. For readers, we highlight the following limitations of our analyses. First, this study used open-year data, which CIHI defines as data received before the official annual submission deadline and which may change or be partially complete [21]. Second, this study applied a short list of exclusions, such as linking referrals and assessments within a given timeframe, that did not exactly match the CIHI methodology. Nevertheless, the changes reported in this study did not deviate by more than 1 or 2 percentage points from the CIHI report, and the overall conclusions did not change [21]. Third, when interpreting the sub-sample results, it is important to note that a patient's most recent interRAI assessment may not have been an accurate reflection of their health status at the time of service delivery, especially if they had missed or delayed assessments due to the COVID-19 pandemic. Fourth, the sub-sample results are applicable only to patients who received a standardised assessment; thus, these results should not be extrapolated to newly admitted or existing patients who did not receive a standardised assessment. Fifth, our findings were based on province-wide data that may not necessarily apply to individual HCCSS due to differences in assessment practices. HCCSS organisations that had a quicker recovery in the use of standardised assessments would be over-represented in the sub-sample. Sixth, we had neither access to nor the means to analyse non-standardised assessment data that would have helped answer the question of whether apparent changes in acuity represented sampling bias or meaningful changes in health status (i.e., Table 1). Lastly, we did not have access to data on whether or when a patient's services were put on hold, so we could not exclude on-hold or waitlisted days when calculating service utilisation.

Conclusion

Across Ontario's publicly funded home care system, the first wave of the COVID-19 pandemic significantly disrupted patterns of home care admissions, discharges, and standardised assessments, as well as the receipt of personal support, occupational therapy, and physical therapy services. While the home care sector demonstrated its ability to pivot quickly and reverse many of these trends, service disruptions coupled with a pullback in standardised assessments placed home care patients at risk. These risks included placing the responsibility of bridging the care gap on patients and families and not adequately prioritising the standardised assessments that are necessary for individual- and system-level monitoring.
We conclude that the sector should prioritise both home care assessment and service delivery during a crisis to ensure persons who rely on these essential services are well-supported in the community.
DIRECTIONAL THRESHOLDING ALGORITHM FOR GRAY SCALE IMAGE SEGMENTATION

The main aim of image segmentation is to change the representation of the image so that the boundaries and objects in an image can be easily observed. In this study, a novel algorithm is proposed for the segmentation of gray scale images. A codebook is used in the proposed approach for optimal multidirectional thresholding. The background and foreground pixel values are stored in the codebook. The algorithm uses standard deviations along four directions to search for background and foreground pixels iteratively. The misclassification error and Jaccard index are used to measure the system efficiency. The mean of the misclassification error measure is 95.80% with a standard deviation of 1.91, and the mean of the Jaccard index is 92.36% with a standard deviation of 5.6. These measures show the efficacy of the proposed system.

I. INTRODUCTION

Adaptive gamma correction and threshold segmentation based image enhancement is discussed in [1]. The input images are segmented using the Otsu algorithm for multiple thresholding, and the quality of the image is identified by an average gray gradient method. Binary coded ant colony algorithm based image thresholding is discussed in [2]. The parameters are initialized and threshold selection is made for the ant colony algorithm, which is then used for classification. Recognition and segmentation of traffic signs in scene images is discussed in [3]. Initially, the color distance is computed; then, a binary threshold method is used for segmentation. Dynamic thresholding based on a binary particle swarm optimization algorithm is discussed in [4]. An adaptive histogram method and thresholding segmentation are used, and the method selects an adequate number of threshold values. Document image binarization for threshold correction and ruled-line extraction is discussed in [5]. The input document image is preprocessed by converting it into gray scale, initial thresholding is used for the segmentation, the background of the image is determined, and threshold correction is then applied to obtain a binary image. Image segmentation based on wavelet domain binary partition is discussed in [6]. Wavelet transform based image features are extracted, and a binary partition tree method is used for segmentation. Threshold segmentation based road marking extraction is discussed in [7]. The road marking search is reduced by using inverse perspective mapping, the road image is segmented by a local adaptive threshold and the Canny edge detection method, and geometric features are used for extraction. Maximum mutual information based image thresholding is discussed in [8]. A multi-scale gradient multiplication transform is used for the decomposition of input images, and the gray level images are segmented by a thresholding method. Capsule image segmentation using an edge detection technique is discussed in [9]. Initially, the capsule images are enhanced and segmented using a neural network; then the borders are traced and analyzed. An image segmentation based multilevel thresholding algorithm is discussed in [10]. The multilevel thresholding is based on particle swarm optimization, Kapur and Otsu methods.
A traffic scene based segmentation method for image sequences is discussed in [11]. First, a preprocessing step removes high frequency noise in the image using low pass filtering. The image is then analyzed by adaptive thresholding to obtain a binary image, and geometric descriptors are analyzed to identify the bright regions. Region distribution and edge detection based image segmentation is described in [12]. A seed selection algorithm is used with the histogram equalization method, edges are detected by multi-threshold and single-threshold approaches, and region merging and texture elimination are used for the segmentation. Palm print image segmentation based on Euclidean distance and an adaptive threshold is discussed in [13]. The input palm print images are preprocessed by a median filter to remove noise, the region of interest is extracted by Euclidean distance, and a global thresholding method is used for segmentation. A handwritten image segmentation method based on single thresholding is discussed in [14]. The input handwritten images are segmented by a thresholding method, and the peak signal-to-noise ratio is used to predict the threshold value. In this paper, a novel method for gray scale image segmentation using multidirectional thresholding is discussed. The organization of the paper is as follows: the methods and materials of the proposed system are explained in Section 2; the experimental results and discussion of gray scale image segmentation are presented in Section 3; finally, the conclusion is given in the last section.

II. METHODS AND MATERIALS

Segmentation is one of the major tasks in digital image processing and analysis. The purpose of segmentation is to divide an image into regions which are uniform and homogeneous with respect to some characteristics such as gray level or texture. Segmentation can be critical for subsequent analysis and scene description. Optimal thresholding is also used in empty vehicle redistribution [15] and wheel set online measurement [16]. In this study, a novel algorithm for gray scale image segmentation is proposed. A codebook is organized in the proposed approach to store the foreground and background pixel values. Initially, the codebook does not contain any foreground or background pixels. The proposed algorithm starts by finding the gray value (G) that occurs more frequently than any other gray value in the image. This gray value is inserted into the foreground list of the codebook first. Usually the edges are aligned along four directions D1, D2, D3 and D4, as shown in Figure 1. The standard deviation of a set of data points shows how much dissimilarity or dispersion from the mean exists. A low standard deviation indicates closeness of the data points to the mean, and a high value indicates that the data points are spread over a large range of values. This property can be used to calculate an optimal threshold for segmentation of gray scale images.

Fig. 1. Alignment of edges in four directions D1, D2, D3 and D4.

To separate the pixels into foreground or background, the standard deviation along each of the four directions centered on G needs to be computed. The largest standard deviation along a direction indicates the largest intensity variation; hence, the gray values other than G in that direction are inserted into the background list, while the gray values in the other directions are inserted into the foreground list. As the foreground list is updated, this process is repeated for all the pixels in the foreground list. The threshold is then calculated as the mean of the minimum and maximum intensity in the foreground list. To analyze the performance of the proposed thresholding technique, the segmentation accuracy is compared with other state-of-the-art techniques such as Otsu and iterative thresholding.
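A minimal sketch of the directional search described above is given below, in Python/NumPy. It is an illustration of the idea rather than the authors' implementation: the paper does not specify the neighbourhood length used along each direction, so a one-pixel offset on either side is assumed here, and the image at the end is a synthetic example.

```python
import numpy as np

# Offsets for the four directions D1-D4 (horizontal, vertical and the two diagonals);
# a one-pixel neighbourhood on each side is an assumption, the paper does not fix it.
DIRS = {
    "D1": [(0, -1), (0, 1)],
    "D2": [(-1, 0), (1, 0)],
    "D3": [(-1, -1), (1, 1)],
    "D4": [(-1, 1), (1, -1)],
}

def directional_threshold(img):
    """Return a global threshold found by the directional codebook search (sketch)."""
    img = np.asarray(img, dtype=np.int32)
    h, w = img.shape
    counts = np.bincount(img.ravel(), minlength=256)
    foreground, background = {int(counts.argmax())}, set()   # seed: most frequent gray value G
    queue = list(foreground)

    while queue:
        g = queue.pop()
        ys, xs = np.nonzero(img == g)
        samples = {d: [] for d in DIRS}           # intensities met along each direction
        for y, x in zip(ys, xs):
            for d, offsets in DIRS.items():
                for dy, dx in offsets:
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        samples[d].append(int(img[ny, nx]))
        stds = {d: (np.std(v) if v else 0.0) for d, v in samples.items()}
        worst = max(stds, key=stds.get)           # direction with the largest spread
        for d, vals in samples.items():
            for v in set(vals) - {g}:
                if d == worst:
                    background.add(v)             # values along the most variable direction
                elif v not in foreground and v not in background:
                    foreground.add(v)
                    queue.append(v)               # newly added foreground values are revisited
    # Threshold: mean of the minimum and maximum intensity in the foreground list.
    return (min(foreground) + max(foreground)) / 2.0

# Example on a synthetic two-region image.
rng = np.random.default_rng(0)
toy = np.where(rng.random((64, 64)) > 0.5, 200, 60) + rng.integers(-5, 5, (64, 64))
t = directional_threshold(toy)
print("threshold:", t, "foreground fraction:", float((toy >= t).mean()))
```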
III. RESULTS AND DISCUSSION

The performance of the system is evaluated on an image set of 10 images having similar and dissimilar gray level histogram characteristics, ranging from uni-modal to multi-modal. The Jaccard index and misclassification error are computed as performance metrics. The misclassification error is expressed in terms of the cardinality of sets of image pixels; it is also used in adjustable entropy [17]. The value for completely dissimilar images is 0 and the value for identical images is 100. The misclassification error measure is defined by

ME = 100 × (|B(IT) ∩ B(I0)| + |F(IT) ∩ F(I0)|) / (|B(IT)| + |F(IT)|),

where I0 and IT are the resultant and gold standard images, respectively, and B(·) and F(·) denote the background and foreground pixel sets of an image. Similarity is also measured by the Jaccard similarity coefficient, which is frequently used for binary data; the Jaccard index is also used in the clustering coefficient [18]. The Jaccard index is given by

J = Ki / (Ni + Mi − Ki),

where Ni is the gold standard image area, Mi is the binary image area and Ki is the area of overlap calculated between the binary and gold standard images. Figure 2 shows grayscale image segmentation based on the multidirectional thresholding approach. The proposed system is compared with the Tizhoosh method [19] and Otsu's method [20] in terms of misclassification error and Jaccard index. Table 1 shows the misclassification error efficiency of the Tizhoosh, Otsu and proposed methods. From Table 1, it is observed that the proposed system has higher misclassification efficiency compared to the Tizhoosh and Otsu methods. The mean and standard deviation of the Tizhoosh, Otsu and proposed methods are shown in Table 2. The Jaccard index efficiency is given in Table 3. From Table 3, it is observed that the proposed system has higher Jaccard efficiency compared to the Tizhoosh and Otsu methods. The mean and standard deviation of the Tizhoosh, Otsu and proposed methods are shown in Table 4.

IV. CONCLUSION

A novel algorithm for grayscale image segmentation is presented in this study. The multidirectional thresholding method is used for gray scale image segmentation, and a codebook is used to store the foreground and background pixel values. The misclassification error and Jaccard index are computed to compare the system with other approaches such as the Tizhoosh and Otsu methods. The mean of the misclassification error measure is 95.80% with a standard deviation of 1.91, and the mean of the Jaccard index is 92.36% with a standard deviation of 5.6. The results show that the proposed approach produces higher efficiency with a lower standard deviation in terms of misclassification error and Jaccard index, indicating accurate segmentation of the images.

Fig. 3. Grayscale image segmentation based on the multidirectional thresholding approach.
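As a closing illustration of the two evaluation metrics used above, the short sketch below computes the Jaccard index and a 0-100 misclassification-error score for a pair of binary masks. The set-based formulas are standard; the 0-100 "similarity" scaling is assumed from the description in the text and may differ from the exact convention used in the paper's tables.

```python
import numpy as np

def jaccard_index(binary, gold):
    """J = K / (N + M - K), with K the overlap area and N, M the two foreground areas."""
    binary, gold = binary.astype(bool), gold.astype(bool)
    k = np.logical_and(binary, gold).sum()
    n, m = gold.sum(), binary.sum()
    return 100.0 * k / (n + m - k) if (n + m - k) > 0 else 100.0

def misclassification_score(binary, gold):
    """0-100 score: fraction of background and foreground pixels classified as in the gold standard."""
    binary, gold = binary.astype(bool), gold.astype(bool)
    correct_fg = np.logical_and(binary, gold).sum()
    correct_bg = np.logical_and(~binary, ~gold).sum()
    return 100.0 * (correct_fg + correct_bg) / gold.size

# Tiny example: a 4x4 gold-standard mask and a segmentation that misses one foreground pixel.
gold = np.array([[0, 0, 1, 1], [0, 0, 1, 1], [0, 0, 0, 0], [0, 0, 0, 0]])
seg  = np.array([[0, 0, 1, 1], [0, 0, 1, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
print(round(jaccard_index(seg, gold), 1), round(misclassification_score(seg, gold), 2))
```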
Sustainable Marketing and Strategy The theme of this Special Issue (SI) is Sustainable Marketing and Strategy, as in the literature, we have seen growing evidence of how sustainability efforts are increasingly bringing significant benefits to enterprises [...] Introduction The theme of this Special Issue (SI) is Sustainable Marketing and Strategy, as in the literature, we have seen growing evidence of how sustainability efforts are increasingly bringing significant benefits to enterprises.This effect has been further witnessed following the outbreak of the COVID-19 pandemic, whereby we have seen an even more conscious consumer appear.The benefits include increased brand awareness, as those firms that stand out tend to catch attention by being keen to help and not hurt the environment as well as local communities.Consumers are thus opting for more environmentally and community-friendly firms, which is thus a means to achieving greater competitiveness.That the sustainability theme may be seen simply as a marketing tactic has also been debated.It surely makes strategic and marketing sense to be nice to the community, above what is required by law.Doing so in sincere and planned efforts will reap better returns in the age of the informed consumer.Greenwashing must be avoided at all costs, as firms will be punished for not being authentic in their social responsibility efforts.The articles published in this SI discuss how companies are managing the issues related to a new era whereby sustainability is a major goal for academics and practitioners alike. This SI intended to explore and be an outlet of discussion regarding the following themes: sustainable marketing, digital marketing for sustainable strategies, marketing and big data to shape customer behavior, green branding strategies, marketing and corporate social responsibility, marketing ethics, marketing and sustainable decision-making, and sustainable marketing policies for a green market. Discussion In this context, eight articles were accepted for publication which discuss the topics of determinants of marketing in globalization, employer branding as a marketing tool for strategic talent management, new ways of working and the analysis of employee engagement, the sustainability of the supply chain and purchasing policies, a new strategy to measure the behaviour of wine tourists, the opinion leaders' influence on the sustainable development of corporate-led consumer advice networks, and the influence of cross-listing on the relationship between financial leverage and R&D investment. Here is an overview of the articles published in the SI, including the focus, title, methodology of the article, and the keywords-showing how a diverse set of data (primary and secondary) and approaches (qualitative using words and quantitative using numbers) were followed: • [1]-Focus: franchising as a strategy for business expansion.Title: Determinants of Global Expansion: A Study on Food and Beverage Franchisors in Malaysia.Methodology: qualitative (including interviews and thematic analysis).Keywords: franchising; franchisor; global expansion; case study. 
Final Considerations

We hope to have communicated a message whereby a new marketing and strategy approach is necessary. The world has become more united following recent trials and tribulations; hence, a new perspective has evolved, much to the surprise of observers, academics, and executives alike. As a result of recent hard times, consumer markets and business markets look toward new examples of ethical behaviour and seek new heroes. Often, the enemy appears unannounced and unexpectedly. In view of such occurrences, new levels of positive and humane human and corporate behaviour need to be reached. Indeed, let us recall that corporations are run and led by people, who will be held responsible for irresponsible acts and their consequences. History has shown us time and again that we as a species feel for each other and for the environment, especially when pushed to our limits and when we have had time to think beyond our immediate needs and necessities. The recent lockdowns and imposed restrictions on our liberty, due to the pandemic, have led us down that avenue, one of introverted thought on how we can each make a difference and take a step forward towards a better life. We are all, in the end, mortal and accept our mortality in times of hardship. Join the new marketing and strategy generation, which seeks to create more value and benefits and less waste. It is all a question of the community, both internal and external to the firm. We hope you enjoy our SI and pass on what you learn, on strategy and about sustainable and responsible growth, to others in your network.

• [2]-Focus: internal human capital as a strategic asset. Title: Strategic Talent Management: The Impact of Employer Branding on the Affective Commitment of Employees.
• [6]-Focus: development of a wine experience scale tested in Portugal (in Porto and on the island of Madeira). Title: Developing a Wine Experience Scale: A New Strategy to Measure Holistic Behaviour of Wine Tourists. Methodology: quantitative (including a questionnaire). Keywords: scale validation; SEM; wine storytelling; wine tasting excitement; wine involvement; winescape.
• [7]-Focus: a new corporate strategy, online community marketing and social media influencer marketing. Title: The Role of Opinion Leaders in the Sustainable Development of Corporate-Led Consumer Advice Networks: Evidence from a
Developing a Quality Assessment Index System for Scenic Forest Management: a Case Study from Xishan Mountain, Suburban Beijing

The public's demand for more and better forest landscapes is increasing as scenic forest tours flourish in China, especially in the capital, Beijing. How to improve the quality of scenic forests has become one of the greatest concerns of urban foresters. Although numerous studies have focused on scenic forest management, to date, no reports have been found on developing a quality assessment index system for scenic forest assessment. In this study, a simple and scientific index system was established using an analytical hierarchy process (AHP) to quantitatively assess scenic forest quality. The index system is composed of four scales: individual tree landscape quality, in-forest landscape quality, near-view forest landscape quality and far-view landscape quality. The in-forest landscape quality was determined by horizontal and vertical stand structures, species composition and under-canopy landscape traits. Near-view forest landscape quality was mainly determined by patch characteristics, seasonal change, visibility, color change of patches and stand age class. To test the validity of our quality assessment index system, scenic forests in Xishan were used as a case study. The results show that near-view forest landscape was the most important scale for the overall quality of the scenic forest, according to the priorities of the criterion layer, and the second most important scale was far-view forest landscape. Seasonal change, patch color contrast, patch distribution and patch shape accounted for 52.2% of the total of 13 indices in the near-view forest landscape. The integrated quality of scenic forests in Xishan was at an average level, and the in-forest landscape, near-view landscape and far-view landscape had below average quality.

Introduction

In general, scenic forests have high aesthetic values, especially the visual perception of beauty [1]. Currently, scenic forests, as popular travel attractions, play an increasingly important role in the pursuit of leisure for Chinese people. For instance, the highest number of daily visitors to Xiangshan Park, the most well-known scenic forest in Beijing, reached 138,000 on October 28, 2012, making this the best record since 1989, when the first "Red Leaf Festival" was launched [2]. Although the rush of tourism into scenic forests has triggered a series of ecological and social problems, it has also stimulated a new demand from the public for more and better forest landscapes [3]. Since 1985, more than 2400 forest parks have been established in China. However, most scenic forests, especially those in suburban areas in northern China, are derived from planted forests, which are composed of few tree species and have high stock density with mass self-pruning and low rates of natural regeneration due to poor light conditions [4]. In suburban Beijing, planted Pinus tabulaeformis Carr. and Platycladus orientalis (L.)
Franco forests account for 39% of the total forest area, of which the young and middle-aged trees account for 87%.Hence, on the recommendation of urban foresters, the Chinese government reached a consensus that improving the quality of scenic forests should be given high priority [5].For decades in China, management techniques for scenic forests, such as refilling [6], mixing [7], tending [8][9][10][11] and modulation of stock density [3], have been intensively studied.These techniques have been integrated based on the relationships between one or several stand structural factors and scenic beauty evaluation values of in-forest landscapes or near-view forest landscapes.Most measures of these techniques come from commercial forest management, because valid, systematic and scientific criteria for assessing the quality of scenic forests have not yet been established.Therefore, to evaluate the quality and to identify the problems of scenic forests for further improvement, establishing a quality assessment index system has become an issue of some urgency. Landscape assessment addresses the quality of objective visual landscapes in terms of individual or social preferences for various landscape types, which is considered to be the key part in studies of landscape aesthetics and is the basis for landscape management, as well [12].Assessments are based on the assumption that the scenic beauty of the entire landscape can be explained in terms of the aggregation of the values of landscape components [13].For example, scenic forests, as a synthesis of structures, functions and aesthetics, have a large number of ecological, silvicultural and aesthetic components that affect their visual quality, such as tree height, crown size, species composition, tree density, color, crown patches, texture, patterns and shapes [14].The structured method of landscape assessment describes, classifies, analyzes and then evaluates these components [12]. A number of methods in landscape assessment have been devised since the 1960s; they started with descriptive inventories based on the experience of experts who gradually turned to the public, the best source of data in their opinion.Public preference methods, such as SBE (scenic beauty estimation), consist of two approaches: quantitative public preference surveys and landscape features.These approaches became very popular.Today, given the development of geographic information systems (GIS), there is a trend to carry out visual landscape research using computer technology and digital data [15].For the experts who assume that scenic quality is directly related to landscape diversity or variety, descriptive inventories are much simpler and more valid methods [5,[16][17][18], in contrast with those approaches that involve public preferences and require massive surveys and measurements [19][20][21], while quantitative holistic methods rely on high-resolution DEM or digital aerial images [13,22].In general, descriptive inventories, public preference models and quantitative holistic methods are the most popular methods of landscape assessment, and they greatly contribute to decision making and landscape management [23]. 
Although a number of studies have been carried out on scenic forests in China, a valid quality assessment system or a quality criterion has not been proposed to date.To grade the quality of scenic forests and help improve their visual quality, it is necessary to develop a scientific and systematic assessment index system.Based on research by Zhang [3], who analyzed scenic forest quality factors using principal components analysis (PCA), we have built a quality assessment index system using an analytical hierarchy process (AHP) through descriptive inventories, which are based on subjectively selected methods, but which can be applied objectively. Beijing and the Xishan Mountain Area Beijing is located in the North China Plain, between 115°25′-117°30′ E longitude and 39°28′-41°05′ N latitude.The Taihang Mountains are to the west, and the Yanshan Mountains are to the northeast.The total mountainous area is 10,400 km 2 , which accounts for 62% of the Beijing area.The mountains surround the central city and form an important natural buffer and recreational area. Xishan Mountain, which is located in western Beijing and belongs to the Taihang Mountain Range, has a total area of 3000 km 2 .It includes a series of hills, for example, the West Ling, Baihua, Miaofeng and Jiulong hills, which are of great interest to tourists.Our study plots are in the Xishan experimental forest farm, which are managed by the Beijing Municipal Bureau of Landscape and Forestry and are representative of scenic forests (Figure 1). Climate Beijing has a typical semi-humid continental monsoon climate with four distinct seasons.The annual average temperature is 10 °C-12 °C.In the coldest month, the temperature is between -7 °C and -4 °C, and in the hottest month, the temperature is 25 °C-36 °C.Extreme temperatures include a low of -27.4 °C and a high of 42 °C.The annual average temperature of the lower mountain area is 10 °C, gradually dropping to 8 °C towards the west and north.The annual frost-free period is 180-200 days.The average annual rainfall is approximately 630 mm and is unevenly distributed.Up to 75% of the annual precipitation falls in the summer and is often heavy in July and August, but precipitation falls sparingly in winter and spring from December to March.The annual evaporation, which is as high as 1800-2000 mm, is about three-times the annual precipitation and is closely related to location and elevation.In general, evaporation is greater in the mountains and at low elevations than in the plains and high altitude areas. Vegetation The zonal vegetation in Beijing is a mixture of pine and oak forests, and Pinus tabulaeformis and Quercus variabilis BI. are the dominant species.However, at present, up to 70% of the forest vegetation consists of plantations established during the 1950s and 1960s [9].Most plantations on Xishan Mountain consist of P. tabulaeformis, P. orientalis, Robinia pseudoacacia Linn., Q. variabilis, Cotinus coggygria Scop., Prunus davidiana Franch.and Prunus sibirica (L.) Lam.The dominant shrub species are Vitex negundo Linn., Myripnois dioica Bge., Deutzia grandiflora Bge. and Spiraea trilobata Linn. 
Methods In our study, a group of 25 experts in forestry, ecology and tour planning were invited to help in the decision making process.The experts were from seven different institutions (forestry universities, an academy of forestry and ecology research institutes), with forestry backgrounds that are directly or indirectly related to forest management activities.Our indices were derived from the literature, especially the research carried out by Zhang (2010) [3], which aimed at finding the intrinsic relationship of scenic forest quality factors by principal components analysis (PCA), based on measurement data from a total of 130 plots and a very large number of landscape photographs of the low mountain areas of Beijing.Given our expert survey, we modified the indices and constructed a quality index system using an analytical hierarchy process (AHP).In the end, we assessed the Xishan forest scene to test the validity of our quality index system. Analytical Hierarchy Process The AHP is a method of group decision-making.It provides a comprehensive and rational framework for structuring a decision problem [24] and is widely used in assessment system development [25][26][27][28][29].It represents and qualifies all relevant elements, relates them to the overall goal and then evaluates alternative solutions.To generate priorities, we decomposed the decision into the following four steps, using the method developed by Saaty (2008) [30]. Step 1: Decompose the problem into a hierarchy of goal, criterion and alternatives. Step 2: Evaluate the elements of the hierarchy by a pairwise comparison method. Step 3: Obtain a numerical priority for each element of the hierarchy, allowing diverse and often incompatible elements to be compared in a rational and consistent way. Step 4: Calculate the numerical priorities for each of the decision alternatives. Each of our 25 experts received a questionnaire based on the 41 indices (Table 1).Sufficient information was provided to understand the scenic forests, forest management and indices.Then, the experts evaluated the elements of the hierarchy by comparing them pairwise using a 9-point scale to find their impact on the element above them in the hierarchy.The matrix of pairwise comparisons represented the intensities of their preferences between individual pairs of alternatives.In the end, we received feedback from all 25 experts, from which the judgment matrices were constructed, the geometric mean used to represent the average ratio and the weights associated with the quality evaluation indices for scenic forests calculated. 
Determination of Index Contributions to Scenic Forest Quality To quantitatively obtain the contribution of each index for grading scenic forest quality, we divided each index into several degrees, referred to as sub-items, evaluated the general scenic forest quality out of a total score of 100 and then computed the score of each index or sub-item according to their priorities as follows: where B, C, D, E refer to the criterion (Level B), sub-criterion (Level C), index (Level D) and sub-item layer, respectively, PBi is the criterion Bi's priority in Level B under the overall goal, PCj is the sub-criterion Cj's priority in Level C under criterion Bi, PDk is the index Dk's priority in Level D under sub-criterion Cj, TDk is the contribution of each index to scenic forest quality, WEkl is the sub-item Ekl's weight in the sub-item layer under index Dk, SEkl is the score of sub-item Ekl under index Dk and 100 is the total score of scenic forest quality.The ranges of i, j, k and l are as follows: i = 1, …, 4, j = 1, …, 5, k = 1, …, 41 and l = 1, …, 5. Determination of Quality Grades According to the SEkl (the score of sub-item Ekl under index Dk), we calculated the maximum score, Max(SDk), and the minimum score, Min(SDk), of the index Dk, then we calculated the maximum score, Max(SBi), and the minimum score, Max(SBi), of the criterion Bi as follows: i = 1, 2, 3 and 4 refer to the individual tree landscape, in-forest landscape, near-view forest landscape and far-view forest landscape, respectively.k refers to the number of indices under the criterion Bi.We then classified the scenic forest quality into five grades: excellent (Grade 1), very good (Grade 2), average (Grade 3), below average (Grade 4) and failing (Grade 5).The score ranges of the five grades are as follows: Field Investigation To validate the quality assessment index system, we carried out a field investigation at the Xishan experimental forest farm.A representative sampling method was used.Twenty-two samples of individual trees, 121 samples of in-forest landscapes, 62 samples of near-view forest landscapes and 11 samples of far-view forest landscapes were investigated.This investigation was implemented in the typically planted P. tabulaeformis, P. orientalis pure forests and mixed coniferous and broadleaved forests, which are composed of P. tabulaeformis, Q. variabilis, P. orientalis, P. davidiana and R. pseudoacacia.The individual trees were the isolated or dominant trees in the scenic area.The in-forest sample areas were 20 m × 20 m for pure stands and 20 m × 30 m for plantations of mixed species.Stand information, including tree location, tree height, crown diameter, cover of understories, stock density and forest structures, was measured in detail.Simultaneously, following the tour route, we took a large number of landscape photographs of all of the plots at different visual scales (in-forest, near-view and far-view) to aid in index assignment and quality assessment. 
Quality Assessment Index System for AHP Following the advice of the experts, we obtained 41 indices (Table 1).To improve the understanding of visual perception from the point of view of the tourists, we constructed a quality assessment index system in multiple scales, which linked the physical structure of the scenic forests to their aesthetic quality.In the following, we propose and discuss four scales for our system, i.e., that of the individual tree landscape, in-forest landscape, near-view forest landscape and far-view forest landscape (Table 1).The individual tree landscape refers to an isolated tree or the dominant tree in a stand, which is usually the largest and the most eye-catching one.The quality of the individual tree landscape primarily regards the beauty of plant morphology, while in-forest landscape refers to the forest community and its under-canopy landscapes, where recreational activities mostly take place.The appearance of a forest landscape represents the entire scenery of the forest, which is largely characterized by patch patterns and shows the beauty of forest forms, lines, colors and textures.To be more specific about the appearance of the forest landscape, we defined appearance as near-view and far-view forest landscapes based on a perceived distance of 500 m.For a near-view forest landscape, the visible distance between an observation point and the forest view is less than 500 m, where the features of individual trees and patches can both be identified.In the far-view forest landscapes, beyond the immediate 500 m distance and up to 3000 m from an observation point, the details of individual trees become vague, and the visual impact of patches is dominant. Consistency Test To decide whether a matrix should be accepted both within a level and among levels, we carried out a consistency test, where a matrix is only accepted as a consistent one if the consistency ratio (CR) < 0.1.According to the results shown in Table 2, all CRs at each level were smaller than 0.1; as seen, the CR of Levels B-C was 0.0313; the CR of Levels A-B was 0.0484; and the CR of Levels A-D was 0.0417.In this case, the comparison matrix satisfied the consistency test, and therefore, the priorities were accepted. Importance Analysis of Indices According to the results of the pairwise comparisons by the experts on the scale of 1-9, we calculated the priority of each index and sub-item.As the results show, the priorities of the four-scale forest landscapes were 0.1257, 0.1855, 0.4051 and 0.2807, respectively, at Level A, which suggests that the effect of individual tree characteristics accounted for only 12.6% of the scenic forest quality, which, in turn, indicates that the experts paid much more attention to the near-view and the far-view landscapes (Table 1). The index of ornamental characteristics showed the greatest level of acceptance at the individual landscape level, followed by the index for crown shape (Table 1).Species of flowering trees, individual trees with a special crown shape, large size or ancient trees and a high ratio of stem length-to-tree height were important factors for individual landscape quality (Table 3). 
Species composition, vertical structure and under-canopy landscape were the major factors affecting in-forest landscape quality, while the effects of stem size and horizontal structure were relatively weak; their priorities accounted for only 16.4% and 10.4%, respectively (Table 1). Thus, mixed forests, especially uneven forests with diverse species, seemed more acceptable than pure and even forests (Table 4). Dense stands appeared not to be a good choice for scenic forests, given that visual distance was the most important index under the vertical structure sub-criterion; its priority accounted for 50.9% of the vertical structure indices (Table 1). Visual distances larger than tree height were largely accepted (Table 4). (Table 4 notes: visual distance D13 is the horizontal visual distance from the observation point to the furthest visible tree relative to its tree height; ratio of large trees D16 is the ratio of the number of large trees (diameter > 20 cm) to the number of trees in the visual area.)

The priorities of seasonal change, patch color contrast, patch distribution and patch shape accounted for 52.2% of the total of 13 indices in the near-view forest landscape (Table 1). These results indicate that diverse tree species, plentiful color, strong patch color contrasts, randomly distributed color patches and irregular patch shapes were the most attractive factors in the near-view forest landscapes (Table 5). (Table 5 note: one of these indices, mostly related to the tree heights of adjacent patches, is defined as the ratio of the difference between the tree heights of adjacent patches to the average tree height of the forest stand.)

Compared with the near-view forest landscape, seasonal change was considered the most important index for the far-view forest landscapes, followed by color diversity and color contrast. The priorities of these three indices accounted for 59.7% of the total of eight indices in the far-view forest landscapes (Table 1). As the observation distance increased, visual perception was mostly stimulated by color factors, especially high color diversity and strong color contrast (Table 6).

Quality Levels of Scenic Forests

To set up criteria for scenic forest quality, we proposed quality levels of scenic forests based on our quality assessment index system. Given the priorities, we calculated the score of each sub-item (Equations (1) and (2)). We established the minimum and maximum scores to calculate the limits for each index of the four landscape scales (Equations (3) and (4)). Five quality levels, i.e., excellent, very good, average, below average and failing, were determined by dividing the range between the maximum and minimum scores of each index into equal intervals (Equations (5)-(9); Table 7). Once a scenic forest was assessed as excellent or very good, we accepted that the quality level was satisfied; in contrast, when a scenic forest was assessed as average, below average or failing, we suggested that improvements be made to the actual conditions.
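To make the scoring and grading scheme concrete, the following minimal Python sketch applies Equations (1) and (2) and the equal-interval grading described above. All priorities, weights and score limits in the example are hypothetical placeholders chosen for illustration; they are not the values reported in Tables 1-7.

def index_contribution(p_b, p_c, p_d, total=100.0):
    # Equation (1): contribution T_Dk of index Dk to the overall quality score.
    return p_b * p_c * p_d * total

def subitem_score(t_dk, w_ekl):
    # Equation (2): score S_Ekl of sub-item Ekl under index Dk.
    return t_dk * w_ekl

def grade(score, min_score, max_score,
          labels=("failing", "below average", "average", "very good", "excellent")):
    # Equal-interval grading between the minimum and maximum attainable scores.
    step = (max_score - min_score) / len(labels)
    idx = min(int((score - min_score) / step), len(labels) - 1)
    return labels[idx]

# Hypothetical index with criterion/sub-criterion/index priorities 0.4051, 0.3 and 0.2,
# and five sub-items with weights 1.0 down to 0.1.
t_dk = index_contribution(0.4051, 0.3, 0.2)
scores = [subitem_score(t_dk, w) for w in (1.0, 0.75, 0.5, 0.25, 0.1)]
print(round(t_dk, 2), [round(s, 2) for s in scores])
print(grade(62.0, 10.0, 90.0))  # a hypothetical total score and score range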
Quality Assessment of Scenic Forests in Xishan A case study of scenic forests in Xishan Mountain was undertaken using the newly-constructed quality assessment index system.Twenty-two samples of individual trees, 121 samples of in-forest landscapes, 62 samples of near-view forest landscapes and 11 samples of far-view forest landscapes were assessed.According to the results of our investigation of each sub-item, the indices were assigned values from which we obtained the quality scores of the four landscape scales.The results showed that the comprehensive assessment score of the Xishan scenic forests was 32.33, which suggests an average level and generally conforms to the actual situation (Table 8). Discussion To evaluate the quality of scenic forests for management improvement, we constructed a quality assessment index system using an analytical hierarchy process (AHP).To cope with the experience of tourists in their exploration of scenic forests, we defined the hierarchy in four forest landscape scales and determined their indices and weights on the basis of decisions made by experts.The Xishan scenic forests in Beijing were used as a case study to validate the quality assessment index system.We are of the opinion that the results are quite reasonable.By using this quality assessment system, it is easy to score the quality of scenic forests, grade their level and then help decide whether and where improvements are needed. The process of developing our quality assessment index system raised several key questions.The first was the selection of indices.Although a large number of components that impact the quality of scenic forests have been intensively studied [5,31], it seemed rational to select those that complemented our goal.We followed two principles.The first principle required that the components could be compiled and were directly related to the quality of scenic forests, and the second principle insisted that quality assessment indices should be helpful for practical applications in aesthetic improvement and forest management.Therefore, indices relevant to tree density, distribution and the cover of shrubs and herbs, which could guide adjustments, were our priorities.We also imported indices from landscape ecology, such as a patch network, to present large-scale forest landscapes.Considering the complicated factors that affect the long-distant forest landscape, we used 13 indices for the near-view forest landscape and eight for the far-view landscape, based on Zhang's [3] PCA results of scenic forest quality factors.The indices were more than the number of elements that Saaty [30] recommended for pairwise comparison.Therefore, we will try to further screen the indices and modify our index system in the next step of our research. 
The scale of the scenic forest landscape was our second concern.A number of studies have focused on one specific forest landscape scale, such as that of individual trees [32] or stands [33][34][35].However, tourists' recreational activities may affect their perceptions of scenic beauty [36,37].According to Zhang, Chen and Dong [3,33,34], factors that impact visual quality will change as long as the landscape scales change.Therefore, to improve the presentation of the intrinsic beauty of forests, we took into account the human experience of exploration, as well as the scale of changing views.A multi-scale quality assessment index system, including four scales, i.e., that of individual trees, in-forest landscape, near-view forest landscape and far-view forest landscape, was developed from the point of view of tourists. We also tried to determine whether it was the "visual quality" or the "quality" of the scenic forests that attracted tourists.Compared with multi-use forests, scenic forests are more specialized for sightseeing and recreational purposes.In our case, Beijing scenic forests largely provide for the public's recreational objectives.Hence, their aesthetics and recreational opportunities were our greatest consideration.It is widely accepted that visual perception is most important for the experience of visitors.In addition to physical and silvicultural criteria, most of our indices concerned visual beauty. Taking into account all of our concerns, the results of the AHP appear to be reasonable, and all of our experts provided positive feedback regarding our hierarchy.The indices of our four scales adequately represented the entire set of characteristics of the scenic forest landscape and were parallel to the point of view of tourists.Given the priorities of the four scales of forest landscapes, the near-view and far-view forest landscapes attracted more attention than individual trees and the in-forest landscape.Based on our pairwise comparisons, the dominant factors that affected the quality of scenic forests also showed differences among these four scales.Ornamental characteristics, species composition and seasonal change were, in order, the factors of the four scales that most affected this quality, which suggests that the scaled hierarchy explained the scenic forest quality quite well. 
Our assessment results, which yielded a comprehensive quality score of 32.33 for the Xishan scenic forests, suggest an average level of scenic quality.In line with the results of our field investigation and assessment, we discussed the problems of the Xishan scenic forests.The first problem was that most of the Xishan forests are artificial and were planted a few decades ago.It was hard to find a good individual tree landscape, owing to the lack of ancient and large trees.The second problem was that the low score of the in-forest landscape was caused primarily by the simple species composition and unclear vertical structures.The common species composition of pine, cypress and pagoda trees made the in-forest landscape flat and dull, while the heavy cover of shrubs blocked the sight lines under the canopy, making under-canopy spaces messy and difficult to enter.The third problem concerned the near-view forest landscape, where the stems and tree shapes could barely be identified and where patches showed little variation due to their simple species composition and distribution.Finally, the far-view forest landscape had similar patch problems.The patches were mainly formed by a large number of dark conifer blocks and few color tree fragments, which presented weak color contrasts that have little visual attraction for tourists. Conclusions and Prospects We divided our quality assessment index system of scenic forests into four scales, i.e., that of individual trees, in-forest landscape, near-view forest landscape and far-view forest landscape, by employing an AHP method.The weights of the four scales were 0.1257, 0.1855, 0.4051 and 0.2807, respectively, which indicates that the experts paid more attention to the far-view aesthetic scenery than to the recreational experience deep in the forest. Based on the sub-item scores, we classified quality grades into five levels, referred to as excellent, very good, average, below average and failing.Scenic forests evaluated at the excellent and very good levels are acceptable at present standards, while those at the average, below average and failing levels need appropriate adjustments according to the actual situation, such as refilling, mixing, tending and modulating the stock density. After our investigation of the Xishan scenic forest, we assessed its quality using our new quality assessment index system.The overall quality score of 32.33 suggests an average level, which is close to acceptable.However, the quality of the in-forest landscape was below average.Our assessment results were similar to findings from studies by Wu [11], who conducted field investigations and a public survey on Xishan scenic forests to provide a reference for their quality improvement.Accordingly, we concluded that the simple species composition and ill-defined vertical structure were the main problems, suggesting that more ornamental trees would increase the variety of patches and add to seasonal changes.We also concluded that the undergrowth required more pruning to clear space beneath the canopy. 
In general, this research used an AHP method to construct a multi-scale quality assessment index system that provides a valid and rational assessment of scenic forest quality and supports decisions about improvement. It was a general, expert-based contribution to forest management. For future research on scenic forests, we encourage the testing of our assessment system. Certainly, several aspects can be improved. In the first instance, attractive forest landscapes reflect sustainable forest ecosystems, so it is rational to expand or modify the system of indices based on the stability and health of forest ecosystems according to local conditions. Secondly, our index system was based on the opinions of experts; future research will validate the methodology using the perceptions of the general public regarding scenic aspects. Lastly, it should be emphasized that while any assessment index system is convenient for grading the quality of scenic forests, the key point is still quality improvement. Presently, we face the challenges of growing concern about environmental sustainability and diverse public demands on our forests. Therefore, it is necessary to construct a technological system for forest management, conservation and sustainable development.

Figure 1. Location of the study area (redrawn by the author from Google Maps).
Table 1. Hierarchy and priorities of indices for the assessment of scenic forest quality.
Table 2. Consistency index (CI) and consistency ratio (CR) within a level.
Table 3. Categories and scores of individual tree landscapes.
Table 4. Categories and scores of in-forest landscapes.
Table 5. Categories and scores of near-view forest landscapes.
Table 6. Categories and scores of far-view forest landscapes.
Table 7. Score range of the quality levels of scenic forests.
Table 8. Frequency of the grade levels of the Xishan scenic forests.
v3-fos-license
2017-07-30T07:47:53.204Z
2017-05-25T00:00:00.000
31016863
{ "extfieldsofstudy": [ "Geology", "Computer Science" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/2072-4292/9/6/525/pdf?version=1495715149", "pdf_hash": "aa66ed3d7a5dfc9e268aa7c2c491a568dda45e82", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41733", "s2fieldsofstudy": [ "Environmental Science", "Geography" ], "sha1": "aa66ed3d7a5dfc9e268aa7c2c491a568dda45e82", "year": 2017 }
pes2o/s2orc
Satellite Monitoring the Spatial-Temporal Dynamics of Desertification in Response to Climate Change and Human Activities across the Ordos Plateau , China The Ordos Plateau, a typical semi-arid area in northern China, has experienced severe wind erosion events that have stripped the agriculturally important finer fraction of the topsoil and caused dust events that often impact the air quality in northern China and the surrounding regions. Both climate change and human activities have been considered key factors in the desertification process. This study used multi-spectral Landsat Thematic Mapper (TM), Enhanced Thematic Mapper Plus (ETM+) and Operational Land Imager (OLI) remote sensing data collected in 2000, 2006, 2010 and 2015 to generate a temporal series of the modified soil-adjusted vegetation index (MSAVI), bare soil index (BSI) and albedo products in the Ordos Plateau. Based on these satellite products and the decision tree method, we quantitatively assessed the desertification status over the past 15 years since 2000. Furthermore, a quantitative method was used to assess the roles of driving forces in desertification dynamics using net primary productivity (NPP) as a commensurable indicator. The results showed that the area of non-desertification land increased from 6647 km2 in 2000 to 15,961 km2 in 2015, while the area of severe desertification land decreased from 16,161 km2 in 2000 to 8,331 km2 in 2015. During the period 2006–2015, the effect of human activities, especially the ecological recovery projects implemented in northern China, was the main cause of desertification reversion in this region. Therefore, ecological recovery projects are still required to promote harmonious development between nature and human society in ecologically fragile regions like the Ordos Plateau. Introduction Desertification can be defined as land degradation in the arid, semi-arid and dry sub-humid areas resulting from climate change and human activities [1].Desertification directly affects the wellbeing of over 250 million people and puts at risk the livelihoods of over one billion people around the world [2,3], making it one of the most serious environmental issues [1,4].China is one of the most seriously affected countries with respect to desertification, especially in the northwestern part, where the desertification process has accelerated the loss of the agriculturally important fine fraction of the topsoil by wind erosion and has generated frequent dust storms [5,6].The spread of desertification has caused economic losses and reduction of biodiversity, and it is detrimental to human health. Quantitative measurement of diagnostic indicators of desertification and analysis of the driving factors are crucial for the control and rehabilitation of desertification [6][7][8]. 
The Ordos Plateau lies in the semi-arid farm-pastoral region of North-Central China.This area has long been affected by desertification under the influence of both poor climate conditions and human intervention.To improve the ecological environment of northern China, the Chinese government implemented the Grain for Green Program and the Return Grazing to Grass Program, among other ecological engineering projects that have proven beneficial to desertification reversion [9].However, little is known about the quantitative relationship between the desertification trend and its driving factors, including climate change and human activities, which may hinder the development of rehabilitation strategies.Therefore, effective monitoring of the desertification status and a thorough understanding of the relative roles of the driving factors in the desertification process are fundamental for the control of desertification. Field monitoring can provide useful information about the desertification process; however, its application is often limited due to its low spatial coverage.By comparison, remote sensing is a time-and cost-efficient way to extract explicit information at various temporal and spatial scales, and it has become a valuable tool for monitoring environmental change [10][11][12].Multi-spectral satellite instruments such as Landsat have been acquiring imagery over the Earth since the 1970s, which makes them a potential tool for long-term continuous monitoring of desertification [12][13][14][15].Temporal variations in vegetation directly represent the dynamics of biomass in arid and semi-arid ecosystems.Additionally, variations in vegetation are easy to interpret from satellite images.Thus, image-based vegetation indices are widely used as indicators in the assessment of desertification [16,17].Particularly, the fraction of green vegetation cover, which can be calculated by the normalized difference vegetation index (NDVI), is now one of the two global indicators required for reporting by governments, according to the United Nations Convention to Combat Desertification (UNCCD) [18].Landsat and similar satellite instruments can be used to measure and map NDVI, but their spectral bands are less suited for measuring both dry vegetation and bare soil components [19] which are more common surface components in dryland regions such as the Ordos Plateau.Thus, monitoring or assessing desertification using NDVI time series alone in dryland regions is problematic [20].Researchers have also tried to use indices that are sensitive to the non-vegetation components exposed in soils and rocks, such as the bare soil index (BSI) [21] and the grain size index (GSI), in the assessment of desertification [22].Other researchers have tried to incorporate these non-vegetated components into green vegetation indices, such as the soil-adjusted vegetation index (SAVI) and the modified SAVI (MSAVI) [16,17].Land surface albedo is also a potential indicator for bare ground [23,24].The combinations of these indices and/or the original reflectance data can be used as input for statistically modeling desertification status using various methods, including supervised classification [25], support vector machine [26] and decision tree [23,27] methods. 
Although the dynamics of desertification status can be monitored and assessed using remote sensing images and assessment models, the driving forces in the desertification process are still disputed [28,29]. Many studies have suggested that both climate change and human activities play important roles in the process of desertification [30,31]. However, distinguishing human-induced from climate-induced factors, or determining which of them is the primary cause during a certain period, is still a challenging task due to the complex mechanism of desertification. To achieve this goal, some studies have used vegetation indices such as NDVI and net primary productivity (NPP) to distinguish human-induced degradation or desertification from climate-induced change by comparing the potential and actual vegetation status [17,32,33]. Rain use efficiency (RUE), calculated as the ratio of NPP or NDVI to rainfall, has proven useful for relating the desertification process to its potential driving forces [16,34]. Residual trends (RESTREND) is another potential method, in which a negative trend in residual values (the difference between observed and rainfall-predicted NDVI) can be regarded as a sign of human-induced desertification [33,35]. Although these methods can identify the regions experiencing human-induced desertification, they cannot be used to quantitatively assess the relative roles of climate change and human activities in desertification dynamics. Similar in essence to the RESTREND model, previous studies have shown that the human appropriation of NPP (HANPP) can be used to represent the effect of human activities on the ecosystem [36-39]. In these methods, HANPP was defined as the difference between the climate-driven potential NPP (PNPP) and the actual NPP (ANPP) simulated using both climate and remote sensing data. Based on this concept, desertification dynamics can be linked to their driving forces, with PNPP and HANPP denoting the impacts of climate change and human intervention, respectively [8,9,40]. This method provides a potential way to quantitatively assess the relative roles of climate change and human activities in desertification dynamics.

The objectives of this study were: (1) to assess the desertification status from 2000 to 2015 based on indicators retrieved from Landsat data and a decision tree model; and (2) to quantitatively analyze the relative roles of climate change and human activities in the reversion and expansion process of desertification using NPP as the indicator.

Study Area

The Ordos Plateau (hereafter referred to as Ordos for short) is located in the southwest of the Inner Mongolia Autonomous Region (IMAR), China, ranging from 37°41′-40°51′N to 106°42′-111°31′E (Figure 1). It includes seven banners (county-level administrative divisions in the IMAR) and one urban district, covering an area of 86,882 km2, with elevation varying from 1000 to 1500 m above sea level. Ordos has a typical semi-arid climate with a mean annual temperature of 5.3-8.7 °C and an annual sunshine duration of 2716-3194 h. The annual average precipitation ranges from 450 mm in the southeast to 150 mm in the northwest [41]. Moreover, most of the rainfall occurs from July to September.
The major parts of this area are the Kubuqi Desert on the northern margin of Ordos and the Mu Us sandy land, located in southeastern Ordos. The main land cover/use type is grassland, as shown in Figure 1, and the dominant plant species is Artemisia ordosica. The main soil types in this area include aeolian sandy soil, chestnut soil and brown soil.

Ordos is a typical farming-pastoral area where agriculture and animal husbandry play important roles in the local economy. The population of Ordos grew from 1.31 million in 2000 to 1.57 million in 2015, according to data released by the local bureau of statistics. This increase in population has led to irrational use of land, such as extensive reclamation and over-grazing, which has exacerbated land degradation and the expansion of desertification.
Data Sources and Preprocessing Landsat-5 Thematic Mapper (TM), Landsat-7 Enhanced Thematic Mapper Plus (ETM+) and Landsat-8 Operational Land Imager (OLI) images were used in this study.Landsat TM/ETM+ level-1 terrain-corrected (L1T) data of 2000, 2006, and 2010 and Landsat OLI L1T data of 2015 were used to assess the desertification of Ordos.Seven scenes (path/row: 127/32, 127/33, 128/32, 128/33, 128/34, 129/32 and 129/33) of Landsat data were employed to build a seamless mosaic imagery.These Landsat multi-spectral data were acquired from the United States Geological Survey (USGS) (http://glovis.usgs.gov/).To minimize the influences of vegetation phenology, these time series remote sensing data were obtained from August to September, when vegetation reaches its maximum during the growing season.Acquiring cloud-free images that cover the whole study area within a given year was difficult due to the large geographic coverage.Therefore, some images from previous or subsequent years were used to generate a cloud-free mosaic.The Landsat L1T product processing includes radiometric, geometric and precision corrections using ground control chips as well as the use of a digital elevation model to correct parallax error due to local topographic relief [42][43][44].Therefore, we did not perform a geometric correction during the preprocessing.The original digital number (DN) values were firstly converted to the top of atmosphere (TOA) spectral radiance using the parameter information in the header file of Landsat TM/ETM+ [45] and Landsat OLI.The atmospheric correction was a critical pre-processing step in order to calculate the indicators for desertification assessment.In this study, atmospheric correction of the Landsat data was performed using the Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes (FLAASH) module embedded in the Environment for Visualizing Images (ENVI) software.The FLAASH module provides a unique solution for each image using the MODerate resolution atmospheric TRANsmission (MODTRAN4) radiative transfer code as the atmospheric radiation correction model with high precision.The mid-latitude summer atmospheric model was used to define the water vapor amount, and the rural aerosol model was selected to define the aerosol type.In addition, the "2-band (K-T)" option was selected for aerosol retrieval in the FLAASH module.After the atmospheric correction, the TOA radiance was converted to a surface reflectance value for each pixel. NPP is a key indicator for assessing the relative roles of climate change and human activities in desertification dynamics.The monthly NDVI product (MOD13A3) derived from the Moderate Resolution Imaging Spectrometer (MODIS) multi-spectral data and ground-based meteorological data from 2000 to 2015 was used to estimate NPP.Meteorological data, including monthly average temperature, cumulative precipitation and the total solar radiation from 18 meteorological stations in and around Ordos were collected from the China Meteorological Administration (http://data.cma.cn/).Spline interpolation was employed to generate meteorological grid data to drive the NPP model. Indicators for Desertification Assessment Changes in the land surface, such as vegetation coverage and soil component, are closely related to desertification dynamics.Three multi-spectral indices calculated from Landsat images that are related to vegetation coverage or soil component were used as indicators for desertification assessment. 
Several vegetation indices, such as the ratio vegetation index (RVI), normalized difference vegetation index (NDVI), soil-adjusted vegetation index (SAVI) and modified soil-adjusted vegetation index (MSAVI), have been designed to quantitatively estimate vegetation coverage or biomass. Among these indices, previous studies have shown that MSAVI is helpful because it can increase the dynamic range of the vegetation signal while minimizing the influence of background soil [46]. Thus, in order to quantify vegetation coverage, MSAVI was selected and calculated from the near-infrared (NIR) and red (R) bands of the Landsat multi-spectral data as follows:

MSAVI = [2ρ_NIR + 1 - sqrt((2ρ_NIR + 1)^2 - 8(ρ_NIR - ρ_R))] / 2

where ρ_NIR and ρ_R stand for the spectral reflectance measurements acquired in the near-infrared and visible red bands, respectively.

BSI can be used for mapping bare soil and distinguishing it from vegetation cover. Generally, BSI increases with increasing bare soil exposure of the land surface during the expansion of desertification. Moreover, as a normalized index that combines both bare soil and vegetation information, BSI can be used to assess the status of vegetation coverage, ranging from high vegetation cover to exposed soil conditions [21,47]. This alleviates the problem that vegetation index values can be unreliable when vegetation is sparse. Thus, BSI can be used to investigate land degradation and desertification processes. Here, BSI was calculated using the following equation [47]:

BSI = [(ρ_SWIR + ρ_R) - (ρ_NIR + ρ_B)] / [(ρ_SWIR + ρ_R) + (ρ_NIR + ρ_B)]

where ρ_B, ρ_R, ρ_NIR and ρ_SWIR stand for the spectral reflectance values of Band 1, Band 3, Band 4 and Band 5 for the Landsat TM/ETM+ data and Band 2, Band 4, Band 5 and Band 6 for the Landsat OLI data, respectively.

Land surface albedo can be defined as the instantaneous ratio of surface-reflected radiation flux to the incident radiation flux over the shortwave spectral domain [48]. It is an important indicator of changes in surface conditions, such as temperature and aridity/humidity, during desertification dynamics. In general, replacing vegetation with bare soil causes an increase in land surface albedo, and this increase in albedo implies degradation or desertification [22,49,50]. Therefore, albedo is also employed to assess the dynamics of desertification. The albedo can be calculated as a linear combination of the monochromatic reflectance values of the reflective bands [48]:

Albedo = 0.356ρ_B + 0.130ρ_R + 0.373ρ_NIR + 0.085ρ_SWIR1 + 0.072ρ_SWIR2 - 0.0018

where ρ_B, ρ_R, ρ_NIR, ρ_SWIR1 and ρ_SWIR2 stand for the spectral reflectance values of Band 1, Band 3, Band 4, Band 5 and Band 7 for the Landsat TM/ETM+ and Band 2, Band 4, Band 5, Band 6 and Band 7 for the Landsat OLI, respectively.

Desertification Dynamics Assessment

The desertification of the land surface can be divided into five grades, non-desertification, slight, moderate, high and severe desertification, according to a frequently used grading system in China [51]. In this study, the C5.0 decision tree (DT) model was employed to construct a classification model and to identify the grades of desertification using the combination of indicators calculated from the Landsat data. This model can classify large amounts of data according to established division rules, and it allows for visualizing the tree-structured rule sets, making it widely applicable [52].
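The three indicators above are per-pixel functions of surface reflectance and produce the layers that are fed to the classification step. The following minimal Python sketch, assuming each band is already available as an atmospherically corrected reflectance array in [0, 1], shows how they could be computed; the tiny 2 × 2 arrays are synthetic values used only for illustration, and the albedo coefficients are those of the linear combination given above.

import numpy as np

def msavi(nir, red):
    # Modified soil-adjusted vegetation index.
    return (2 * nir + 1 - np.sqrt((2 * nir + 1) ** 2 - 8 * (nir - red))) / 2

def bsi(blue, red, nir, swir):
    # Bare soil index (normalized form).
    return ((swir + red) - (nir + blue)) / ((swir + red) + (nir + blue))

def albedo(blue, red, nir, swir1, swir2):
    # Shortwave albedo as a linear combination of the reflective bands.
    return (0.356 * blue + 0.130 * red + 0.373 * nir
            + 0.085 * swir1 + 0.072 * swir2 - 0.0018)

# Synthetic 2 x 2 "scene"; real inputs would be full Landsat TM/ETM+/OLI bands.
b1 = np.array([[0.05, 0.10], [0.08, 0.12]])  # blue
b3 = np.array([[0.07, 0.20], [0.10, 0.25]])  # red
b4 = np.array([[0.30, 0.25], [0.35, 0.28]])  # NIR
b5 = np.array([[0.20, 0.30], [0.22, 0.35]])  # SWIR1
b7 = np.array([[0.15, 0.25], [0.18, 0.30]])  # SWIR2

print(msavi(b4, b3))
print(bsi(b1, b3, b4, b5))
print(albedo(b1, b3, b4, b5, b7))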
A training set of records tagged with decision labels and containing a group of attribute values is first required to build a decision tree. In this study, over one hundred training points for each desertification grade were selected as the input sample data set according to the Atlas of Desertified and Sandified Land in China and visual interpretation of satellite images. Using this input sample data set, the information gain ratio criterion of the C5.0 model was employed to determine the optimal thresholds of each indicator for separating the desertification grades [53,54]. A set of tree-like rules was produced using this method. The results obtained by the training procedure, however, may perform poorly due to errors in the dataset. To reduce such errors, a newly built decision tree has to be pruned back. In the C5.0 model, pruning starts automatically on the leaf nodes and spreads upward to the whole tree using the information gain ratio. Further details on the pruning methods can be found elsewhere [54,55]. Finally, the pruned, hierarchically structured rules (Figure 2) were applied to assess the status of desertification.

Calculation of Actual and Potential NPP

The actual NPP (ANPP) was calculated using the Carnegie-Ames-Stanford Approach (CASA) model, a light-use efficiency model based on the resource-balance theory [56,57]. This model has been proven feasible and accurate in simulating NPP compared with observed values and has been widely used in China [58]. Monthly NPP was obtained with the CASA model, and the annual NPP (gC m-2 yr-1) was then calculated as the sum of the monthly NPP within a year. In the CASA model, NPP is calculated using the following equation:

NPP(x, t) = APAR(x, t) × ε(x, t)

where NPP(x, t) represents the NPP at a given location x and time t, APAR(x, t) (MJ m-2) represents the canopy-absorbed incident solar radiation integrated over a given period, and ε (gC MJ-1) is the actual light-use efficiency (LUE). APAR can be calculated using the following equation:

APAR(x, t) = SOL(x, t) × FPAR(x, t) × 0.5

where SOL(x, t) is the total solar radiation and FPAR(x, t) is the fraction of photosynthetically active radiation absorbed by vegetation, which can be determined from NDVI. The constant 0.5 represents the proportion of the total solar radiation available to vegetation. The LUE can be expressed as follows:

ε(x, t) = Tε1(x, t) × Tε2(x, t) × Wε(x, t) × εmax

where Tε1(x, t) and Tε2(x, t) denote the temperature stress coefficients, which reflect the reduction of LUE caused by the temperature factor, Wε(x, t) is the moisture stress coefficient, which indicates the reduction of LUE caused by the moisture factor, and εmax is the maximum LUE under ideal conditions, set to different values for the various land cover types [59]. A more detailed description of this algorithm can be found in [60].
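The following minimal Python sketch illustrates the light-use-efficiency formulation just described, NPP = APAR × ε, applied month by month. The stress scalars (t1, t2, w) and εmax are placeholders here; in the actual model they are derived from the temperature, moisture and land-cover inputs, so the single-pixel numbers below are purely hypothetical.

import numpy as np

def monthly_npp(sol, fpar, t1, t2, w, eps_max):
    """One month of NPP (gC m^-2) from solar radiation, FPAR and stress scalars."""
    apar = sol * fpar * 0.5      # absorbed photosynthetically active radiation
    eps = t1 * t2 * w * eps_max  # actual light-use efficiency
    return apar * eps

def annual_npp(monthly_grids):
    """Sum twelve monthly NPP grids (same shape) into an annual total."""
    return np.sum(monthly_grids, axis=0)

# Hypothetical single-pixel example for one month.
print(monthly_npp(sol=400.0, fpar=0.35, t1=0.9, t2=0.85, w=0.7, eps_max=0.389))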
To validate the estimation accuracy of the CASA model, we compared the simulated NPP values of 2006 to the MODIS annual NPP product (MOD17A3) in the study area. We selected the MODIS NPP product as the reference because it has been validated as consistent with field-observed values, and the product can capture the NPP pattern across various biomes and climate regimes [61]. Figure 3 illustrates the comparison of the two datasets. The results show a high degree of consistency between the two datasets, with the R2 exceeding 0.8 at a high significance level of p < 0.001. Yet, the CASA-modeled data were slightly higher than the MODIS estimates, which can be partially explained by the fact that the MOD17 product tends to underestimate NPP in China [62,63]. Nevertheless, we deemed it appropriate to utilize the CASA-simulated NPP in the time-series trend analysis.
In this study, the potential NPP (PNPP, with no human disturbance) was simulated using the Thornthwaite memorial model [64], which is expressed as follows:

PNPP = 3000 × [1 - e^(-0.0009695(v - 20))]

v = 1.05r / sqrt(1 + (1.05r / L)^2)

L = 3000 + 25t + 0.05t^3

where PNPP is the potential annual NPP (gC m-2 yr-1), r denotes the annual total precipitation (mm), t is the annual average temperature (°C), v is the annual actual evapotranspiration (mm) and L is the annual maximum evapotranspiration (mm).

Quantitative Assessment of the Relative Roles of Climate Change and Human Activities in Desertification Dynamics

As an important ecological indicator for estimating vegetation production, NPP reflects the complex interactions between climate change and human activities. We used the yearly total NPP to assess the relative roles of climate change and human activities in the desertification dynamics. The effect of climate change on NPP was measured based on the temporal trend of PNPP. The effect of human activities on NPP was measured using the trend of the human appropriation of NPP (HANPP, i.e., PNPP - ANPP, denoting the NPP loss caused by human intervention). For the trend analysis, the slopes (referred to as SP and SH, respectively) over a period from year t to year t + n were calculated by ordinary least squares regression using the following equation:

S = [n Σ(x_i y_i) - Σx_i Σy_i] / [n Σ(x_i^2) - (Σx_i)^2]

where x is the year, y is PNPP or HANPP, and n is the time span. For the areas identified as experiencing desertification reversion or expansion, the relative roles of climate change and human activities can be mapped on a grid scale by analyzing SP and SH using the possible scenarios defined in Table 1.
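As a concrete sketch of this attribution step, the Python snippet below fits the least-squares slope to the yearly PNPP and HANPP series of a single pixel and splits a reversion between climate change and human activities according to the sign rules spelled out in the next paragraph and in Table 1. The yearly values are hypothetical and are not data from the study.

import numpy as np

def ols_slope(years, values):
    """Ordinary least squares slope of `values` against `years` (same as the equation above)."""
    x, y = np.asarray(years, float), np.asarray(values, float)
    n = x.size
    return (n * np.sum(x * y) - x.sum() * y.sum()) / (n * np.sum(x ** 2) - x.sum() ** 2)

def reversion_roles(sp, sh):
    """Relative roles (climate %, human %) for a pixel that experienced reversion."""
    if sp > 0 and sh > 0:        # Scenario 1: climate favourable, human pressure increasing
        return 100.0, 0.0
    if sp < 0 and sh < 0:        # Scenario 2: climate unfavourable, human pressure decreasing
        return 0.0, 100.0
    if sp > 0 and sh < 0:        # Scenario 3: both favourable -> split by relative magnitude
        total = abs(sp) + abs(sh)
        return 100 * abs(sp) / total, 100 * abs(sh) / total
    return None                  # remaining sign combination corresponds to an "error" region

years = range(2006, 2011)
sp = ols_slope(years, [182, 190, 188, 197, 205])  # hypothetical PNPP series
sh = ols_slope(years, [95, 88, 86, 80, 74])       # hypothetical HANPP series
print(reversion_roles(sp, sh))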
A positive S P or negative S H trend indicates that the effect of climate change or human activities during the period of year t to t + n is beneficial to vegetation growth and desertification reversion.Conversely, a negative S P or positive S H trend indicates that the effect of climate change or human activities is beneficial to vegetation degradation and desertification expansion.In the situation of desertification reversion, if S P and S H are both positive (Scenario 1), the reversion can be attributed to climate change entirely.If S P and S H are both negative (Scenario 2), the reversion can be attributed to human activities entirely.If S P is positive and S H is negative (Scenario 3), the reversion is considered the combined result of climate change and human activities.In this situation, the individual contributions of the two factors can be calculated using the relative ratio of S P and S H .The scenarios for desertification expansion are also shown in Table 1. Dynamics of Desertification in Ordos between 2000 and 2015 The assessment results of desertification grades for the four periods are shown in Figure 4, and the resultant statistical data are shown in Table 2 (the numbers 1, 2, . . ., 7 in Figure 4 represent Hangjin, Otog, Otog Front, Dalad, Ejin Horo, Wushen and Jungar Banner, respectively, and the number 8 represents the Dongsheng District).From 2000 to 2006, the slight desertification area roughly doubled from 13,943 km 2 to 30,389 km 2 .The non-desertification area fell by 9.3% from 6647 km 2 to 6027 km 2 .Meanwhile, the moderate, high and severe desertification areas decreased by 36.5%, 13.7% and 27%, respectively.This significant decrease indicates that the study area experienced a recovery trend overall despite the fact that the total desertification area increased from 80,256 km 2 to 80,876 km 2 .The areas of moderate, high and severe desertification showed a diminishing trend, with their areas decreasing by 44.1%, 20.5% and 28.8%, respectively, during 2006 to 2010.Conversely, the areas of non-and slight desertification increased by 72.6% and 32.8% during the same period, which suggests a steady trend of recovery.During 2010-2015, the areas of non-, moderate and high desertification experienced an increasing trend, while the others decreased on different levels, as shown in Table 2.During 2000 to 2015, the area of non-desertification increased from 6647 km 2 to 15,961 km 2 , with an annual growth rate of 6%.Meanwhile, the area of severe desertification fell at an annual rate of 4.3%, from 16,161 km 2 to 8331 km 2 .The gradual increase of non-desertification area and decrease of severe desertification area suggests that the desertification status of Ordos has reversed in the past 15 years. 
Relative Roles of Climate Change and Human Activities in Desertification Reversion

Based on the identified areas that experienced desertification reversion, the relative roles of climate change and human activities in the various periods were mapped by analyzing the temporal trends of PNPP and HANPP according to the scenarios defined in Table 1. The relative roles of these two factors showed obvious temporal and spatial heterogeneity. During the period 2000-2006, the areas that experienced reversion mainly caused by climate change (relative role of climate change greater than 50%) accounted for 43.0% of the whole reversed area. These areas were mainly distributed in Dalad, Jungar, western Hangjin and western Otog Banner, as shown in Figure 6. Meanwhile, the areas that experienced reversion mainly caused by human activities (relative role of human activities greater than 50%) accounted for 56.3% of the whole reversed area, as shown in Figure 7, and they were mainly distributed in Otog Front, Wushen and eastern Otog Banner. Proportionately, the relative roles of climate change and human activities in desertification reversion were not much different for the study area as a whole. From 2006 to 2010, however, in most areas, including Hangjin, Dalad, Jungar, Otog and Otog Front Banner, the reversion was mainly induced by human activities. These areas amounted to 87.3% of the whole region that experienced desertification reversion in Ordos. In the same period, only 11.4% of the total reversed land was mostly caused by climate change. From 2010 to 2015, human activities were also the major cause of reversion in central Hangjin, Dalad, Jungar, Ejin Horo, Wushen Banner and Dongsheng District. The few regions that experienced desertification reversion mainly caused by climate change were distributed in northern Hangjin, western Otog and western Otog Front Banner. The reversed regions dominated by human activities and climate change amounted to 91.7% and 8.0% of the whole reversed land, respectively.
Relative Roles of Climate Change and Human Activities in Desertification Expansion

Similar to desertification reversion, the process of desertification expansion in the various periods was spatially linked to its driving forces on a grid scale, as shown in Figure 8.
Human activities and climate change both played large roles in the desertification expansion from 2000 to 2006. The regions that experienced desertification expansion dominated by human activities were mainly distributed along the common boundary of Hangjin and Otog Banner and accounted for 50.3% of the total degraded area. Meanwhile, the regions that experienced desertification expansion dominated by climate change were mainly distributed in western and central Otog Front Banner and accounted for 44.2% of the total degraded area. Nevertheless, from 2006 to 2010, climate change was the dominant factor controlling the desertification expansion process. In total, 74.6% of the degraded land was mainly caused by climate change, and these areas were mainly distributed in Hangjin and western Otog Banner.

Discussion

Based on the three retrieved indicators (MSAVI, BSI and albedo), a decision tree model was built to assess the status of desertification in Ordos by referencing the published atlas of desertification monitoring along with visual interpretation. An accuracy check of the model was carried out using a group of randomly selected validation samples. The result showed satisfactory accuracy (91%), suggesting the suitability of the decision tree model in this study. Our assessment results revealed that even though some regions experienced desertification expansion, the desertification status in Ordos showed a recovery trend from 2000 to 2015, which is consistent with some studies conducted in arid and semi-arid areas of China [9].

Analyzing the driving mechanism of desertification carries theoretical and practical significance for the prevention and alleviation of desertification. The dynamics of desertification result from the comprehensive effects of climate change and human intervention. Distinguishing the human impacts on desertification dynamics from those of climate change is still a problem that needs to be solved [17]. In this study, the dynamics of desertification were linked to these two kinds of driving forces by comparing the temporal trends of PNPP and HANPP. Based on the scenarios that describe all the combinations of desertification status changes and the trends of PNPP and HANPP, as shown in Table 1, the relative roles of climate change and human activities were quantitatively assessed.
As shown in Figures 6 and 8, the quantitative assessment results incorporated some "error" regions as defined in Table 1. The "error" was defined as regions that experienced desertification expansion when both climate change and human activities were conducive to reversion, and regions that experienced desertification reversion when both climate change and human activities were conducive to expansion. These assessment errors were inevitable, because completely accurate classification or monitoring of the desertification status could not be achieved. Moreover, the uncertainty of the climate data and the interpolation method, along with the uncertainty of the NPP modeling, may have led to errors in the assessment. The proportions of "error" regions relative to the total reversed desertification land over the three periods 2000-2006, 2006-2010 and 2010-2015 were 0.7%, 1.3% and 0.3%, respectively, as shown in Figure 7. During the same periods, the proportions of "error" regions relative to the total expanded desertification land were 5.5%, 3.5% and 1.9%, respectively. These "error" regions would not have affected the overall results for the study area, due to their tiny proportions.

The effects of climate change, including the variations in annual precipitation and temperature, have a great influence on vegetation growth and further affect the expansion and reversion of desertification. Studies have shown that, since the 1980s, the temperature in northwestern China has undergone an increasing trend [65]. In line with the warming trend, the precipitation has also experienced an increasing trend in northwestern China [66]. In the current study, the annual temperature and precipitation of the entire study area were calculated by averaging the records from all 18 meteorological stations in and around Ordos. As shown in Figure 9, the average temperature and the overall average annual precipitation had an upward trend over the period 2000-2015, which is consistent with the previous reports. The warm, humid climate trend is beneficial to desertification reversion in the long run. However, this long-term trend cannot adequately explain the dynamics of desertification in the study area, due to the spatial and temporal heterogeneity of the climate factors. For instance, our results showed that, proportionally, human activity rather than climate change played the main role in desertification reversion during the period 2000 to 2015 for the study area as a whole. In fact, the effects of climate change in different regions and periods varied greatly, and these differences caused variations in the trend of PNPP, which was used to reflect the effect of climate change in desertification dynamics.

Driven by population growth and economic development, the sprawl of built-up areas causes the decline of natural vegetation cover [67]. In mineral-rich regions like Ordos, mining operations could also lead to vegetation degradation [68,69]. Moreover, human activities such as over-grazing, over-reclamation and excessive cutting of woody plants induce the loss of ecosystem equilibrium and the expansion of desertification. Meanwhile, human activities such as reforestation of cultivated land and the banning of grazing can be beneficial to the reversion of desertification. To mitigate land degradation and desertification, the Chinese government has implemented many ecosystem programs, including the Grain for Green Program, in operation since 1999, the Beijing and Tianjin Sand Source Control Project since 2002 and the Return Grazing to Grass Program since 2003.
2003. Under these ecosystem programs, the local government took some long-term measures, such as artificial afforestation and aerial sowing, to restore the ecological environment. Compared to artificial afforestation, aerial sowing is a mechanized afforestation approach that uses planes to spread tree or grass seeds. Closing land for afforestation is another strategy for the restoration of vegetation cover. In areas where natural sowing occurs, the lands are closed with the use of fences or barriers to exclude most forms of human exploitation for some years. Under favorable conditions, the ecological environment in the closed areas will improve during the closure period. According to the statistical yearbooks of Ordos, the area of annual artificial afforestation and aerial sowing (AAAS) in Ordos reached 209 km² in 2001 and remained approximately 50 km² per year in recent years, as shown in Figure 10. Moreover, the cumulative area of closed land for reforestation (CLR) surged from 55 km² in 2004 to 295 km² in 2014. The continuous implementation of ecological programs supports our finding that human activities were the dominant factor that led to desertification reversion during 2000 to 2015. Other studies have also shown the great impact of human factors on desertification reversion [8,9].

Similar to the RESTREND model, in essence, the regions that experienced human-induced desertification expansion and reversion were identified by analyzing the trends of HANPP. However, the validation of the results was difficult, and field investigation and high resolution remote sensing data were needed [9,17]. In this study, one example site (the red dot marked E1 in Figure 1) that underwent human-induced desertification reversion and another site (the red dot marked E2 in Figure 1) that underwent human-induced desertification expansion were used to validate our results.
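The HANPP (and PNPP) trend analysis mentioned above can be implemented as a per-pixel least-squares slope over the annual NPP series. The sketch below (Python/NumPy) is only a minimal illustration of that step: it does not reproduce the CASA model, any significance testing, or the exact trend estimator used in the study.

```python
import numpy as np

def npp_slope(npp_stack, years):
    """Per-pixel linear trend (slope) of an annual NPP time series.

    npp_stack: array of shape (n_years, rows, cols) with annual NPP values
               (e.g. PNPP or HANPP).
    years:     1-D array of the corresponding years.
    Returns the least-squares slope for every pixel.
    """
    years = np.asarray(years, dtype=float)
    t = years - years.mean()                       # centred time axis
    npp = np.asarray(npp_stack, dtype=float)
    anomalies = npp - npp.mean(axis=0)             # centred NPP series
    # slope = sum(t * anomaly) / sum(t^2), evaluated for all pixels at once
    return np.tensordot(t, anomalies, axes=(0, 0)) / np.sum(t ** 2)
```

The resulting slope maps for PNPP and HANPP are the inputs to which the scenario rules of Table 1 are applied.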
Figure 11 shows the human-induced desertification reversion around Qixing Lake, located in the north of the Kubuqi Desert, identified by our method and interpreted by comparing an Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) image acquired in 2010 and the Chinese Gaofen-2 (GF-2, a high-resolution optical earth observation satellite launched in August 2014) image acquired in 2015. To learn more about the status of desertification in the study area, a field campaign was conducted in July 2016. We found that the reversion in this area was due to the planting of local cold-arid-alkaline-resistant vegetation conducted by the Elion Group, a leading Chinese company specialized in land remediation and ecological rehabilitation. Bordered by the Yellow River to the west, north and east, the Kubuqi is rich in groundwater: despite the surface desolation, the ground is comparatively moist just a few feet below the surface. Thus, the indigenous vegetation, such as poplars and sand willows, can survive and thrive. As shown in Figure 12, willows and grass were planted using underground water pumped from wells for irrigation. The afforestation led to vegetation reversion, and this was detected by our model, as shown in Figure 11. By means of its ecosystem restoration model, Elion has turned more than 11,000 km² of degraded land into
productive land. The group tried to create a path of sustainable development in the Kubuqi desert that combines ecology, livelihood and economy. However, the sustainability of the restoration model in Kubuqi is still questionable, taking into account the fact that the underground water level has fallen steadily in recent decades, according to interviews with local people. Further studies are still needed to achieve an equilibrium between afforestation and the available water supply and to ensure an ecologically sustainable relationship between nature and human society.
Figure 13 shows the desertification expansion during 2006 to 2015 caused by increasing human activities detected in this study and the validation by comparing the Google Earth image acquired in 2003 and the GF-2 image acquired in 2015. We hypothesized that the vegetation did not change much from 2003 to 2006, and we used the image of 2003 to represent the desertification status of 2006. As shown in Figure 13, the dynamics of vegetation on the two sides of the fence line varied greatly. Outside of the fence line, the natural vegetation shows a recovery trend without human intervention. Inside the fence line, however, the vegetation shows a degradation trend as the result of over-grazing by livestock such as goats and sheep. This can be clearly seen in Figure 14. Apparently, our results well identified the effects of human activities in desertification expansion.

Conclusions

In this study, a satellite-based method integrated with the decision tree model was employed to monitor the dynamics of desertification degree and coverage. Furthermore, PNPP and HANPP estimated from the CASA model were used to measure the effects of climate change and human activities in the desertification process. This method can effectively identify areas that have undergone desertification reversion and expansion, and it can quantitatively assess the relative roles of the driving factors at a regional scale.
Our results based on Landsat images presented a recovery trend over the period 2000-2015 in Ordos, with the area of desertification land decreasing from 80,256 km² to 70,942 km². Temporally, human activities, especially the implementation of ecological restoration programs, mainly caused the desertification reversion from 2000 to 2015 for the study area as a whole. Climate change (including variations in precipitation and temperature) was the main factor causing the desertification expansion during 2006 to 2010. However, over the periods 2000-2006 and 2010-2015, the effects of climate change and human activities played roughly equal roles in the desertification expansion. Spatially, climate change and human activities were both responsible for the dynamics of desertification, but their relationships with desertification reversion and expansion showed a significant spatial heterogeneity. Therefore, different scenarios are required according to the local situations in order to mitigate desertification and realize sustainable development.
This study indicates that climate change is a key factor influencing the dynamics of desertification, and human activities can help mitigate the desertification locally in arid and semi-arid regions such as the Ordos Plateau. For example, the afforestation implemented around Qixing Lake in the north Kubuqi Desert can improve the local ecological environment. On the other hand, human activities such as stock breeding can aggravate the land degradation process, which policy-makers must keep in mind when making regional ecological conservation decisions and plans. The method used in this study could shed light on the mechanism of desertification and could be used to further develop efficient measures to combat desertification in the arid and semi-arid regions.

Figure 1. (a) Geographic location of the study area; (b) Land cover/use of the study area acquired from the GlobeLand30 dataset (http://www.globallandcover.com/). Red dots marked E1 and E2 in sub-figure (b) show the locations of the satellite images used in Figures 11 and 13, respectively. Ordos is a typical farming-pastoral area where agriculture and animal husbandry play important roles in the local economy. The population of Ordos grew from 1.31 million in 2000 to 1.57 million in 2015, according to the data released by the local bureau of statistics.
Figure 2. Statistically developed decision tree for the assessment of desertification status. MSAVI: modified soil adjusted vegetation index; BSI: bare soil index.

Figure 7. Contributions of climate change and human activities in various periods for (a) desertification reversion and (b) expansion.

Figure 8. Human activities and climate change both played large roles in the desertification expansion from 2000 to 2006. The regions that experienced desertification expansion dominated by human activities were mainly distributed in the common boundary of Hangjin and Otog Banner and accounted for 50.3% of the total degraded area. Meanwhile, the regions that experienced desertification expansion dominated by climate change were mainly distributed in western and central Otog Front Banner and accounted for 44.2% of the total degraded area. Nevertheless, from 2006 to 2010, climate change was the dominant factor controlling the desertification expansion process. In total, 74.6% of the degraded land was mainly caused by climate change, and these areas were mainly distributed in Hangjin and western Otog Banner. Only 21.9% of the degraded land was controlled by human activities, and it was mainly concentrated in northwestern Otog Banner. During the period 2010 to 2015, climate change was the principal cause of desertification expansion in southern Hangjin, central Otog and Wushen Banner, whereas human activities dominated the expansion in western Hangjin, southwestern Otog, and western Otog Front Banner. The desertification expansion mainly induced by climate change and human activities during this period accounted for 49.1% and 49.0% of the total degraded land, respectively.
Figure 9. The trends of (a) annual precipitation and (b) temperature in the study area from 2000 to 2015, using the averaged data acquired from meteorological stations.

Figure 10. The area of closed land for reforestation (CLR) and the area of annual artificial afforestation and air seeding (AAAS) in Ordos.

Figure 11. Human-induced desertification reversion and its validation. (a) Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) image acquired in 2010; (b) Gaofen-2 (GF-2) image acquired in 2015; (c) the areas identified in this study as experiencing human-induced desertification reversion (brown color) during 2010 to 2015. The red dots marked E3 and E4 show the locations of the photos used in Figure 12a,b.

Figure 12. Photos showing the desertification reversion induced by afforestation (a) and grass planting (b). Their locations are shown in Figure 11.
Figure 13. Satellite images showing the desertification expansion caused by stock breeding in Otog Front Banner, Ordos. (a) Google Earth image acquired in 2003; (b) GF-2 image acquired in 2015; (c) the area of human-induced desertification (brown color) during 2006 to 2015. The red dots marked E5 and E6 show the locations of the photos used in Figure 14a,b.

Figure 14. Photos showing the desertification expansion caused by stock breeding on the northwest (a) and southwest (b) of the fence line. Their locations are shown in Figure 13.

Table 1. Scenarios for assessing the roles of climate change and human activities in desertification. S_P: slope of potential NPP (PNPP); S_H: slope of human appropriation NPP (HANPP).
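The body of Table 1 is not reproduced in the extracted text, so the exact decision rules are not available here. The following minimal sketch (Python) only illustrates the kind of per-pixel rule set such a table encodes, using the slopes S_P (PNPP) and S_H (HANPP); the sign conventions (a rising PNPP favouring reversion, a rising HANPP indicating growing human pressure) and the zero thresholds are our assumptions, not the paper's.

```python
def attribute_change(change, s_p, s_h):
    """Illustrative attribution of an observed desertification change.

    change: +1 for expansion, -1 for reversion (from the decision-tree map)
    s_p:    slope of potential NPP (PNPP)           -> climate-change effect
    s_h:    slope of human-appropriated NPP (HANPP) -> human-activity effect

    Assumed conventions: s_p > 0 means climate favours reversion;
    s_h > 0 means increasing human pressure (favours expansion).
    """
    if change == 1:   # observed expansion
        if s_p < 0 and s_h > 0:
            return "expansion: climate change and human activities"
        if s_p < 0:
            return "expansion: mainly climate change"
        if s_h > 0:
            return "expansion: mainly human activities"
        return "error"  # both drivers favoured reversion, yet expansion observed
    if change == -1:  # observed reversion
        if s_p > 0 and s_h < 0:
            return "reversion: climate change and human activities"
        if s_p > 0:
            return "reversion: mainly climate change"
        if s_h < 0:
            return "reversion: mainly human activities"
        return "error"  # both drivers favoured expansion, yet reversion observed
    return "no change"
```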
Experimental validation of a three-dimensional modulation format for data transmission in RGB visible light communication systems

Transmission of three-dimensional (3D) orthogonal frequency division multiplexing (OFDM) signals over red, green and blue (RGB) visible light communication (VLC) systems is proposed and experimentally validated. The novel 3D modulation format aims to overcome the different luminous responses of RGB light-emitting diodes at the same driving current levels by generating a correlation among the three RGB analog signals, instead of considering each channel as independent from the others, as in classical uniform wavelength division multiplexing transmissions. In the proposed scheme, the real and imaginary parts of the OFDM signals are sent through the red and blue channels of the VLC transmitter, respectively, while the third dimension of the OFDM symbol is transmitted through the green channel, according to a procedure that allows the maximisation of the Euclidean distance among the 3D constellation symbols. For the OFDM signal reconstruction and decoding at the receiver side, advanced digital signal processing for frame synchronisation and reconstruction of the 3D constellation diagram is implemented. Experimental results are included to validate the transmission of a 27.3 Mbps system through a VLC link of 1.5 m.

INTRODUCTION

Limited availability of spectrum, electromagnetic interference from electrical equipment and vulnerability to attacks are notable drawbacks of wireless radio networks that provide insufficient reliability and performance [1]. In this context, optical wireless communication (OWC) becomes a promising solution by using the unlicensed light spectrum, with light sources, to provide enough modulation bandwidth for Internet-of-Things (IoT) and Industry 4.0 applications, as well as for the 5G use cases [2]. Visible light communication (VLC) is a kind of OWC system designed for simultaneous illumination and data transmission, using light sources such as light-emitting diodes (LEDs) in the spectrum range around 400-700 nm [3]. Transmission speeds higher than the current WiFi technology, and the possibility of being used in electromagnetic-interference-sensitive places, like airplanes and hospitals, are some of the advantages offered by VLC that have positioned it as a promising technology for the incoming 5G [4,5]. Both white LEDs and arrays of red-green-blue (RGB) LEDs have been adopted in VLC implementations [6]. However, for high data transmission rates, RGB LEDs are preferred over white phosphor-based LEDs, because of the larger bandwidth offered for modulation [7], despite the difficulty involved in the generation of white light from the RGB colours, as reported in [8]. Moreover, RGB LEDs also permit the design of wavelength division multiplexing (WDM) schemes, as well as the fulfilment of multiple-input multiple-output (MIMO) multi-user applications. It is important to mention that, in such a scenario, the RGB bands used as dedicated WDM VLC channels may present different transmission performance, because of the different illuminance efficiency afforded by commercially available RGB LEDs. The coding and modulation schemes play important roles in this context. The Institute of Electrical and Electronics Engineers suggests the adoption of on-off keying (OOK), variable pulse-position modulation (VPPM) and colour-shift keying (CSK) modulation formats in short-range OWC using VLC [9].
Indeed, OOK and classical pulse width modulation (PWM) have been applied for dimming control in VLC systems using white or RGB LEDs [6]. Nevertheless, to provide spectral efficiency (SE) and to combat intersymbol interference (ISI), orthogonal frequency division multiplexing (OFDM) has been considered in VLC to explore the granularity of its multicarrier nature [10-12]. Notable results of OFDM-based VLC systems have been demonstrated aiming, among other goals, at overcoming the channel frequency selectivity [2,7]. More recently, robust equalisation schemes and the design of robust receivers have been reported to overcome undesired transmission impacts such as inter-channel interference (ICI) [13,14]. However, the large fluctuation of OFDM signals, which introduces non-linearities, should be considered in VLC designs. Constant-envelope OFDM (CE-OFDM) schemes can be employed to deal with this inconvenience, despite the inherent decrease of SE in those based on phase modulations [15]. The use of Hermitian symmetry to generate signals with real coefficients is another disadvantage of these multicarrier formats that also reduces SE. The benefits of digital signal processing (DSP) schemes can be explored with the general purpose of SE enhancement [14,16]. Beyond the signal transformation that creates CE-OFDM signals, the generation of asymmetrically clipped OFDM (ACO-OFDM) signals, as well as the employment of the discrete Hartley transform (DHT) in subcarrier multiplexing, are some of the DSP techniques that can play important roles in both the performance and the SE of VLC systems [17,18].

Therefore, in this paper, we propose a 3D modulation format that creates a correlation among the three analog multicarrier signals used to modulate the RGB LEDs, instead of considering each one of them as an independent signal as in uniform WDM transmissions, overcoming the challenge of having different channels in an RGB-based WDM VLC. The proposed 3D modulation format first groups the data bits in code-words of six bits. The code-words are then divided into two main streams, with the first composed of the two most significant bits, used to generate the DHT-based signal that modulates the green LED. The second stream is composed of the OFDM symbols generated from the remaining four least significant bits of the code-words. The imaginary part of the OFDM signal obtained with the fast Fourier transform (FFT) is used to modulate the blue LED, whereas the real part of this OFDM signal is transmitted through the channel that employs the red LED. An experimental demonstration of the 3D mapper, composed of four levels and 16-QAM subcarrier mapping, is described. Experimental results obtained after the transmission of a 27.3 Mbps (in a bandwidth of 5 MHz) OFDM system through a 1.5 m VLC channel show the feasibility of the proposed system. We believe that the proposed 3D mapping and non-uniform multiplexing pave the way for new modulation format designs for unbalanced WDM transmissions.

Related works

VLC is being suggested for various use cases, motivating investigations such as those outlined in [19,20] and [21], among others. A soft integration with other technologies, like the hybrid solution described in [22], is also pursued in scenarios where a backbone network is demanded. However, the focus of this work is on RGB-based VLC systems that address the spectral efficiency requirement.
The experimental realisation described in [7] is one of the first that utilised commercially available RGB LEDs in WDM transmissions. The three LEDs were individually biased and modulated, and, at the receiver, each colour was tested separately with a selective band-pass optical filter. The authors successfully achieved Gbit/s connectivity without affecting the general illumination of the indoor environments. The same authors increased the length of their VLC links, as can be seen in the demonstrations provided in [23]. Spectral overlaps that can be generated by the wide optical bandwidth of the filters introduce ICI in such WDM optical systems. The authors of [24] joined an RGB transmitter to a complementary metal oxide semiconductor (CMOS) image sensor as a receiver to address this issue. The implemented MIMO with CMOS sensors mitigates the ICI, demodulating the three independent colours in the rolling shutter pattern. CMOS image sensing was also explored by Chow et al. in [25] to improve their work using RGB LEDs. Their experimental results demonstrate non-flickering communication in a VLC link of 100 m. The aforementioned DSP benefits that clearly enhance the SE of the VLC multi-user access scheme proposed in [26] were also exploited in the RGB LED-based optical camera communications (OCC) demonstrated in [27]. The carrierless amplitude and phase modulation and subcarrier multiplexing combination provided in [26], and the undersampled phase-shift OOK, WDM and MIMO aggregation afforded in [27], assure a spectral efficiency and a space efficiency equal to 5.08 b/s/Hz and 3 b/Hz/LED, respectively. The experimental results presented in [27] demonstrate successful communication in an OCC link of 60 m, using a single commercially available RGB LED and a standard camera. Spectral efficiency was also experimentally addressed in [28] through a multiplexing of non-orthogonal subcarriers. The authors' achievement was counterbalanced by a serious ICI that demanded complex DSP at the receiver. The high bit rate achieved in [28] was tremendously increased by Bian et al. in [29] using off-the-shelf LEDs. A rate of 15.73 Gb/s was achieved in an OFDM-based VLC system with WDM [RGBY (Y: yellow)] and hard-decision forward error correction (FEC) coding. The authors also provide a design of a low-cost circuit board that can be adopted in simple VLC receivers.

Our contributions

In this paper, we consider WDM transmissions in VLC systems with RGB LEDs modulated by three correlated multicarrier signals. The real and imaginary parts of FFT-based OFDM signals are used to modulate the red and the blue LEDs, respectively, whereas the DHT-based OFDM signals modulate the green LED. The feasibility of a 3D mapper, representing a symbol mapping that results from a 6-bit partition, is experimentally demonstrated in VLC links of at most 1.5 m, employing commercially off-the-shelf RGB LEDs. Therefore, the contributions of this work include: (I) the transmission over RGB channels of three correlated OFDM signals generated without Hermitian symmetry, thus enhancing the system SE; (II) the proposal of a 3D mapper that exploits Euclidean distance enlargement, thus providing performance enhancements; and (III) the suggestion of a simple synchronisation method, adapted to the procedure used in the offline evaluations, that also averts the need for the redundant cyclic prefix (CP) of conventional OFDM signals. The remainder of this paper is organised as follows.
Section 2 briefly describes the theoretical background of the multicarrier signals and the proposed 3D mapper, as well as the channel model that emulates the VLC channel explored in the experiment. A block diagram of the proposed VLC system is described in Section 3. The experimental setup and the experimental results are described in Sections 4 and 5, respectively. The concluding remarks are made in Section 6.

THEORETICAL BACKGROUND

In this section we develop the fundamentals of the generation of OFDM signals based on the FFT and the DHT. The proposed 3D mapper is described and a basic theory of the VLC channel model is provided.

IFFT- and DHT-based OFDM signals

Neglecting the cyclic prefix (CP), a single IFFT-based discrete OFDM signal can be written as

x(n) = (1/√N) Σ_{k=0}^{N−1} X_k e^{j2πkn/N}, n = 0, 1, …, N − 1, (1)

in which N represents the number of multiplexed subcarriers and X_k the constellation points generated by a symbol mapping [30]. Due to the complex-valued nature of x(n), Hermitian symmetry (HS) is required at the input of the IFFT (a set of complex conjugate terms) to ensure real-valued signals, as demanded by technologies like VLC [15,31]. To avert the need for HS, some authors employ the DHT to multiplex real-valued constellations such as binary phase-shift keying (BPSK) and pulse amplitude modulation (PAM) [32,33]. The Hartley transform is closely related to the Fourier transform, with the difference that the Hartley transform produces a real output for a real input and is its own inverse. This means that it is not necessary to use HS to obtain real values, which reduces the computational cost because it only requires real arithmetic calculations. A DHT-based OFDM signal can be written as

x(n) = (1/√N) Σ_{k=0}^{N−1} Y_k cas(2πkn/N), n = 0, 1, …, N − 1,

where Y_k stands for the mapped symbols and cas(⋅) = cos(⋅) + sin(⋅). Because all subcarriers carry data, the DHT-based OFDM doubles the SE when compared to the IFFT-based signals with the same modulation level.

The proposed 3D symbol mapper

The employment of HS in the OFDM signal generation results in a 50% reduction in the spectral efficiency [34,35]. It is also with the aim of eliminating the HS to enhance SE that we propose the transmission of three correlated multicarrier signals that results in a 3D constellation. Thus, all N multiplexed subcarriers carry data, which means that, in our proposal, blocks of N × L bits are processed in the generation of the correlated OFDM signals. In other words, a data bitstream of length N × L is divided into a sequence S = {s_0, s_1, …, s_{N−1}} of N subsequences s_i, each composed of L bits, before symbol mapping. In its turn, each sub-sequence is divided into sub-blocks of M bits (the most significant) and R bits (the remaining), that is, L = M + R. In this work, we used L = 6 to obtain the partition shown in Table 1. Then, the bits of a_n are mapped to QAM symbols denoted as x_k + j·y_k (for j = √−1), and the bits of b_n are mapped to PAM symbols (that designate surfaces), as explained in Algorithm 1. If the value obtained from a binary-to-decimal conversion of the bits b_n is even, the corresponding QAM constellation produced by the a_n bits is rotated according to a phase increment of π/4 rad, in order to maximise the Euclidean distance between the closest symbols of the neighbouring QAM constellations. In the rotation process, the used rotation amounts to

x′_k = x_k cos(π/4) − y_k sin(π/4) and y′_k = x_k sin(π/4) + y_k cos(π/4),

in which x′_k and y′_k represent the real and imaginary parts of the rotated QAM symbols, respectively. After the aforementioned process, two sets of data are obtained, a QAM and a PAM data set. The overall procedure results in a 3D constellation as depicted in Figure 1.
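As an illustration of the bit partition just described, the sketch below (Python/NumPy) maps each 6-bit code-word to a 4-PAM surface (green channel) and a 16-QAM point (red/blue channels), rotating the QAM plane by π/4 rad when the surface index is even. The level sets, the natural (non-Gray) bit-to-symbol mapping and the absence of power normalisation are our assumptions; the paper's Table 1 and Algorithm 1 are not reproduced in the extracted text.

```python
import numpy as np

PAM4 = np.array([-3.0, -1.0, 1.0, 3.0])        # assumed 4-PAM surface levels
QAM16_AXIS = np.array([-3.0, -1.0, 1.0, 3.0])  # assumed 16-QAM per-axis levels

def map_3d(bits):
    """Map a bit array (length multiple of 6) to (16-QAM, 4-PAM) symbol pairs.

    The two most significant bits of each 6-bit code-word select the PAM
    "surface" (green channel); the remaining four bits select the 16-QAM
    point whose real/imaginary parts feed the red/blue channels.
    """
    bits = np.asarray(bits, dtype=int).reshape(-1, 6)
    m_bits, r_bits = bits[:, :2], bits[:, 2:]

    # Binary-to-decimal surface index, then the 4-PAM level (green channel).
    surf_idx = 2 * m_bits[:, 0] + m_bits[:, 1]
    pam = PAM4[surf_idx]

    # Two bits per axis -> 16-QAM point (natural mapping assumed, not Gray).
    i_idx = 2 * r_bits[:, 0] + r_bits[:, 1]
    q_idx = 2 * r_bits[:, 2] + r_bits[:, 3]
    qam = QAM16_AXIS[i_idx] + 1j * QAM16_AXIS[q_idx]

    # Rotate the QAM plane by pi/4 rad on even surface indices to enlarge the
    # Euclidean distance between symbols of neighbouring surfaces.
    even = surf_idx % 2 == 0
    qam[even] = qam[even] * np.exp(1j * np.pi / 4)
    return qam, pam
```

The real and imaginary parts of the IFFT of the returned QAM stream would then drive the red and blue LEDs, while the PAM stream feeds the DHT multiplexer of the green channel.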
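Putting the two transforms together, one block of mapped symbols can be turned into the three correlated real-valued signals (red, blue, green) as sketched below. The DHT is computed from the FFT through the identity DHT{x} = Re{FFT{x}} − Im{FFT{x}}; this implementation choice and the 1/√N scaling are our assumptions rather than the paper's routines.

```python
import numpy as np

def dht(x):
    """Discrete Hartley transform via the FFT: DHT{x} = Re(FFT) - Im(FFT)."""
    X = np.fft.fft(x)
    return X.real - X.imag

def multiplex_block(qam_block, pam_block):
    """Build the three correlated multicarrier signals for one OFDM block.

    qam_block, pam_block: length-N arrays of mapped symbols (see map_3d).
    Returns (red, blue, green) real-valued time-domain signals; no Hermitian
    symmetry is used, so all N subcarriers carry data.
    """
    n = len(qam_block)
    x = np.fft.ifft(qam_block) * np.sqrt(n)    # complex-valued OFDM signal
    red, blue = x.real, x.imag                 # Re -> red LED, Im -> blue LED
    green = dht(np.asarray(pam_block, dtype=float)) / np.sqrt(n)  # -> green LED
    return red, blue, green
```

At the receiver, recombining red + j·blue and applying an FFT recovers the QAM symbols, while applying the same scaled DHT to the green signal recovers the PAM symbols, since the Hartley transform is its own inverse up to a 1/N factor.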
The VLC channel model

Indoor VLC links can be characterised as line-of-sight (LOS) and non-LOS links [36]. A non-directed link is obtained when the communication is established between a divergent transmitted beam and a large field-of-view (FOV) receiver. Power and data rate limitations, caused by multi-path propagation effects, are drawbacks of such so-called diffuse links. Nevertheless, the impairments introduced by non-LOS channels are not the focus of this work. An LOS communication can be conceived with carefully aligned transceivers, in order to overcome the above-mentioned drawbacks. In such a scenario, the received power is P_rLOS = H_LOS × P_t, for P_t the average transmitted optical power and H_LOS the channel DC gain, given by

H_LOS = ((m + 1) A_r / (2π d²)) cos^m(φ) g(ψ) cos(ψ), for 0 ≤ ψ ≤ Ψ_c (and H_LOS = 0 otherwise),

for m the Lambertian emission order, A_r the photodetection area, d the transmission distance, φ the irradiance angle, ψ the angle of incidence, g(ψ) the gain of an optical concentrator and Ψ_c the FOV [37]. The Lambertian order is associated with the LED semi-angle at half-power Φ_1/2 and is obtained as m = −ln(2)/ln(cos Φ_1/2) [37].

SYSTEM MODEL

To transmit the data accommodated in the 3D mapper, we propose the VLC system depicted in Figure 2. The 16-QAM symbols illustrated in Figure 1 are multiplexed via an IFFT, generating a complex-valued OFDM signal. The real (Re) and imaginary (Im) parts of this signal are managed as two different multicarrier signals. The PAM symbols, which represent the four surfaces illustrated in Figure 1, are multiplexed with the DHT, providing the third multicarrier signal. In order to perform synchronisation and channel estimation at the receiver, we add pilot signals at the beginning of each frame, as shown in Figure 3a. Thus, 10% of the frame duration is composed of five equidistant pieces, three with zeros and two with amplitudes alternating between −0.25 and 0.25, as emphasised in the zoom depicted in Figure 3b. The Re part of the OFDM signals modulates the red LED and the Im part the blue one. The real coefficient signals obtained with the DHT modulate the green LED. It is important to notice that the analog signals are DC biased before the modulations. Note that no CP is implemented in the proposed system. After the digital-to-analog conversions (D/A) and the propagation through a free-space LOS channel, the optical signals are detected with a single photodiode. Optical filters are installed before each receiver to provide wavelength selectivity. Each colour filter only allows the passage of a single colour, which generates individual voltages proportional to the intensity of each colour. This allows the ICI that may occur during photodetection to be neglected [38]. At the receiver, the Re and Im signals are combined after photodetection, analog-to-digital conversions (A/D) and pilot removal. The joined complex-valued signal is then demultiplexed via FFT processing, before conventional equalisation through a one-tap equaliser [39]. A similar process is used for the signal obtained from the green channel, except that the demultiplexing is provided by the DHT and the demapping by a common PAM decision.

Computational complexity analysis

A computational complexity analysis, based on the formulas specified in [40] and on the contributions of [32], is provided in this section. It is made in terms of the minimum number of real multiplications and additions that are necessary in the transmission of a single sequence S = {s_0, s_1, …, s_{N−1}}, as well as in the demodulation and recovery of the symbols.
Therefore, knowing that in the proposed modulation format all N multiplexed subcarriers carry data, and considering that the numbers of real multiplications and additions required in the multiplexing process are N[log2(N) − 3] + 4 and 3N[log2(N) − 1] + 4, respectively [40], the total numbers of real operations required in the multiplexing of the complex-valued symbols generated by the remaining bits R are denoted C_R^DFT (multiplications) and A_R^DFT (additions). It is important to notice that these numbers consider that a one-tap equaliser is employed in the channel equalisation process. The total numbers of operations required in the DHT multiplexing of the symbols generated by the bits M are denoted C_M^DHT (multiplications) and A_M^DHT (additions) [32]. In this case, it is considered that the same multiplexing process is used in the demultiplexing, and the one-tap equaliser is also used in the channel equalisation. Thus, in the case of 64 data subcarriers, the minimum total number of operations required in the proposed modulation format is obtained from C_R^DFT, A_R^DFT, C_M^DHT and A_M^DHT.

Model validation through numerical simulations

In order to validate the proposed model, we conduct numerical simulations of the above-described system in additive white Gaussian noise (AWGN) channels. We consider that one data frame is composed of 93 OFDM signals, each one comprising 64 data subcarriers, in both the IFFT and the DHT multiplexing. This size of the frame was chosen due to the limitation introduced by a mixed domain oscilloscope (MDO) used in the experiments as A/D, which limits the capture window to a maximum of 10,000 points for offline signal processing. Therefore, it should be stressed that each value of the bit error rate (BER) was obtained from data frames composed of 93 pseudo-random binary sequences. Figure 4 shows performance evaluations in terms of BER at several values of energy per bit to noise power spectral density (E_b/N_0), considering different modulation levels in the 3D mapping. The agreement between the numerical and the theoretical curves (expression in [41]) shown in Figure 4 validates the implemented numerical model. For the experimental demonstration, we chose the 4-PAM and 16-QAM configuration, illustrated in Figure 4b, for the bit partitioning described in Table 1, due to the trade-off between complexity and bit rate imposed by the proposed scheme. This configuration provides bit rates higher than the 4-PAM and 8-QAM configuration with the performance provided in Figure 4a, and it is less complex than the one with the performance provided in Figure 4c.

EXPERIMENTAL SETUP

A block diagram of the setup used to validate the proposed VLC system is illustrated in Figure 5. The 5 MHz digital signals were generated offline using Matlab and loaded into a 250 Msample/s arbitrary function generator (AFG) used as D/A. The analog signals were amplified and superimposed onto bias currents of ≈300 mA (for bias voltages between 7 and 11 V), to increase the modulation depth of the LEDs. The output of the Picosecond Pulse Labs Bias-Tee (I_DC ≤ 500 mA) was directly supplied to the off-the-shelf RGB LEDs, centered at the wavelengths shown in Figure 6. The transmitter lens (Model 41734.2E) has a focal length of +100 mm. After propagation over an LOS channel supported by biconvex optical lenses, the VLC signals were detected by a HAMAMATSU S10784 photodiode, before A/D conversion by a 2.5 Gsample/s MDO and offline processing of the IFFT-based and the DHT-based OFDM demodulations.
Due to the restrictions of the experimental setup, the reception was made independently (one channel at a time) using the same receiver, but changing the colour filter. Therefore, the synchronisation issue related to latency, caused by the transmission of different wavelengths combined with multi-path effects, is not the focus of this work [38].

The RGB LEDs characterisation

The optical modulation bandwidth and the driving current of each LED are crucial parameters of the experiment. The drive current determines the available signal amplitude limits above which clipping distortions are introduced due to the LED non-linearity. Hence, to quickly forecast the performance of our proposal, we first measured the frequency response and output linearity of the employed LEDs in terms of illuminance. Figure 7a shows that −3 dB bandwidths around 10 MHz are evident in all RGB LEDs. It can be seen from Figure 7b that the output illuminances of the LEDs are significantly different. At the same driving current of 300 mA, the green LED shows the highest output illuminance. The blue LED achieves an output of almost 39% of that of the green, whereas the value measured for the red is almost 30%. For these reasons, we chose to modulate the green LED with the signal obtained from the mapped bits that represent the surfaces, that is, the PAM axis of the 3D mapper used to separate the QAM planes.

EXPERIMENTAL RESULTS

Figure 8 shows a received constellation after propagation through a VLC channel in which the distance between the transmitter and the receiver is 1 m. The diagram of Figure 8a illustrates the rotation described in Section 2.2, and the surfaces depicted in Figure 8b demonstrate the feasibility of the proposed VLC system at the mentioned distance, highlighted by the 3D plot depicted in Figure 8c. In order to reveal potential penalties caused by signal saturation at short reach and by signal attenuation at longer distances (>1 m), we evaluate the performance of the proposed system in terms of error vector magnitude (EVM) at different distances. Figure 9 shows the EVM as a function of distance, measured for all 16-QAM signals in all PAM surfaces. As expected, Figure 9 shows that the performance deteriorates with the link length, especially for distances greater than 1 m. With the PAM surface −1 (b_n = 01) at 1 m, the measured EVM is around −20 dB, contrasting with an EVM ≈ −10 dB measured at a distance of 2 m, as illustrated by the constellations shown in the inset of Figure 9. The performance of all surfaces is almost the same, with a slight difference for the PAM surface −1 at 0.5 m. Although this can be explained by signal saturation, we can assert that, under the implemented conditions, this non-linear effect does not affect the overall system performance.

CONCLUSION

A VLC system with RGB LEDs for the transmission of multicarrier OFDM signals that uses all the available spectrum was proposed and experimentally demonstrated. The system employs an FFT to multiplex QAM symbols organised in a 3D signal mapper, designed with the support of PAM symbols multiplexed by a DHT. To avert the need for Hermitian symmetry, the real coefficients of the FFT-based OFDM signals were transmitted through the red LED and the imaginary coefficients through the blue LED. The real signals provided by the DHT were transmitted via the green LED. Experimental results obtained after the transmission of a 27.3 Mbps OFDM system through a 1.5 m VLC channel show the feasibility of the proposed system in indoor environments.
A simultaneous RGB transmission is part of our future work, to address the impact of propagation delays among other non-linear effects, and thereby encompass applications such as visible light positioning, as well as massive multi-user schemes.
Comparative assessment of air pollutant emissions from brick manufacturing

In this paper, a comparison is made of the level of air pollution between two brick production lines that apply different technologies, an old one and a new, more efficient one. The main pollutants emitted into the air from the baking kilns are CO, SO2, NO2, HCl, HF, and dust. The monitoring of emissions was performed with a Testo 350 flue gas analyzer (automatic method). A Paul Gothe isokinetic sampler was used to sample dust, HCl, and HF, and the analysis was performed in the laboratory using gravimetric and spectrophotometric analytical methods. The results of the tests performed showed a reduction in the level of pollution of up to 90% for all monitored pollutants when the new, BAT-compliant technology is applied, compared to the pollution produced by the old, non-modernized line. At the same time, energy consumption is lower per unit of product, which results in a significant decrease in production costs.

INTRODUCTION

In the brick manufacturing industry, air pollution is due to the processes of burning fuels in the heat treatment furnaces of the bricks and the drying lines of raw (wet) products. By applying new, high-performance production technologies (BAT), fuel consumption decreases, productivity increases, and air pollution is lower than with old manufacturing technologies. The main gaseous components emitted into the air are CO, SO2, NO2, and particles (derived from the combustion of fuels) [1,2], as well as HCl and HF resulting from the burning of clay in the process of baking raw bricks. These pollutants, together with road traffic emissions, have an important contribution to zonal air pollution [3,4]. This paper compares the level of air emissions from two brick production lines that apply different production technologies. Thus, we have a line that applies an old technology and a line that applies a BAT production technology [5-7]. The technological flow that takes place on both production lines includes the following main phases: a) preparation of the material, shaping of the products, handling of the raw products; b) drying of products, handling of dry products; c) baking of products, evacuation of products. In Romania, Order no. 462/1993 and Law 278/2013 [8,9] limit air emissions from technological processes. The emission limit values (ELV) for pollutants specific to this industry regulated by Order 462/1993 are presented in Table 1.

Table 1. Emission limit values* for pollutants specific to this industry (Order 462/1993):
NOx: < 250 (for combustion processes taking place at temperatures below 1300 °C)
SO2: < 500 (when using raw materials with sulfur content < 0.25%)
Dust particles: 1-20 (interval for drying processes, gas fuel combustion)
HCl: 1-30 (the level depends on the composition of the raw material)
HF: < 10 (the level depends on the composition of the raw material)
*The emission limit values refer to an oxygen content in the flue gases of 18% (by volume), under normal conditions of 273 K and 1 atmosphere.

EXPERIMENTAL PART

Description of monitoring site

Emission measurements were performed in June 2020 on the pollutant dispersion chimneys [10] of two manufacturing lines of ceramic products that produce elements for burnt clay masonry (bricks and ceramic blocks of different types and sizes). Manufacturing line 1 (old), with a production capacity of 200 t/day (operating 24 h/day), was put into operation in 1985. The products are burned in a tunnel oven, 160 m long, which works with natural gas.
The kiln has three work areas: drying, burning, and cooling. In the combustion zone, the products are heated to 990 °C for 2 h, after which the temperature starts to drop. At the boundary between the drying and burning areas is the flue gas fan, which discharges the flue gases into the atmosphere by means of a chimney with a diameter of 0.70 m and a height of 10 m. Manufacturing line 2 (new), with a production capacity of 400 t/day (operation 24 h/day), was put into operation in 2010. The products are baked in a tunnel kiln, 140 m long, with a combustion cycle lasting about 20 hours. The kiln works with natural gas; the combustion zone is the area where the products are baked at a maximum temperature in the range of 950-1000 °C, the combustion curve being electronically controlled according to the raw material parameters. In the cooling area of the products, there is an installation for the recovery of hot gases and heat from the baked products, which are directed to the dryer, where they are used to dry the products. The tunnel kiln is provided with a flue gas evacuation chimney, corresponding to the combustion zone, with a diameter of 1.00 m and a height of 20 m.

Equipment Measurements of physical parameters and sampling of pollutants from emissions were performed at the two dispersion chimneys, using the TESTO 350 XL gas analyzer and the Paul Gothe isokinetic sampler.

RESULTS AND DISCUSSION Emission measurements at both lines were performed with the same equipment, and the same analysis methods were applied. During the emission monitoring, the two production lines operated under normal conditions, at the designed parameters. The measurement results, as hourly averages, are presented in Table 3 (see also Fig. 1 and Fig. 2). The low concentration values emitted by line 2 can be observed, compared to the emission limits from the national legislation and the BAT values. Given the level of the concentration values for the two pollutants, a lower combustion efficiency on line 1 is demonstrated, which leads to higher values of the pollutant concentrations. By implementing BAT technologies, more efficient burners are used, with reduced fuel consumption, resulting in lower values of nitrogen oxides, carbon monoxide, and dust. We can conclude that to obtain low concentrations of the pollutants emitted into the air, within the BAT values, it is necessary to apply new production technologies that use more efficient burners and in which combustion is computer-controlled, resulting in lower pollutant emissions. By applying BAT technology, two problems are solved at the same time: pollutant emissions are reduced by up to 90%, and production becomes more efficient, with a lower cost per unit of product.

CONCLUSIONS Industrial pollution combined with road traffic pollution can greatly worsen air quality in an area. In addition to the measures taken to reduce pollution caused by motor vehicles, the authorities also impose measures to reduce pollution from industrial activities. The pollution produced by brick production can be reduced by applying more environmentally friendly production technologies. In this paper, a case study was performed in which the air pollutant emissions from two brick production plants with different technologies were compared, one new and less polluting than the other, which applies an older technology.
The components analyzed are CO, SO2, NO2, and dust, as well as HCl and HF, the latter resulting from the process of baking the clay in the composition of the bricks. Comparing the two production lines shows that manufacturing line 2 is superior to line 1 in terms of production capacity, energy efficiency, and environmental protection. The monitoring of emissions into the atmosphere from the chimneys of the kilns of lines 1 and 2, carried out in June 2020, demonstrated compliance with the BAT values for all pollutants emitted from line 2, which uses the new, BAT-compliant production technology. For line 1, with the old technology, the level of emissions is much higher for all pollutants emitted, compared to the emissions from line 2; the concentrations of pollutants emitted into the air from line 2 are up to 90% lower than those from line 1. Better combustion efficiency, lower heat losses, and lower emissions into the air support the decision to refurbish a brick factory, with remarkable results in reducing both pollution and the unit cost of the product.
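Two brief illustrations of the calculations implied by the results above may be helpful; both are sketches under stated assumptions rather than part of the original study. First, because the ELVs in Table 1 are referenced to an oxygen content of 18% in the flue gas, measured concentrations are normally corrected to this reference before comparison with the limits. A commonly used correction is

$$C_{\mathrm{ref}} = C_{\mathrm{meas}} \cdot \frac{21 - O_{2,\mathrm{ref}}}{21 - O_{2,\mathrm{meas}}},$$

where C_meas is the concentration measured at the actual oxygen content O2,meas (vol%) reported by the flue gas analyzer and O2,ref = 18 vol% is the reference value. Second, the relative reduction achieved by line 2 can be computed from hourly averages as in the following Python sketch; the numbers are placeholders, not the measured values from Table 3.

```python
def percent_reduction(c_old: float, c_new: float) -> float:
    """Relative reduction (%) of a pollutant concentration, old line vs. new line."""
    return 100.0 * (c_old - c_new) / c_old

# Placeholder hourly averages in mg/m3 (illustrative only, not the data from Table 3)
hourly_averages = {"CO": (180.0, 20.0), "NO2": (120.0, 14.0), "dust": (40.0, 5.0)}
for pollutant, (line1, line2) in hourly_averages.items():
    print(f"{pollutant}: {percent_reduction(line1, line2):.0f}% lower on line 2")
```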
v3-fos-license
2020-03-14T13:04:14.772Z
2020-03-01T00:00:00.000
212693014
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/2073-4409/9/3/671/pdf", "pdf_hash": "44dd10ed0d782a008eadd31617a4b910f699bc8e", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41739", "s2fieldsofstudy": [ "Medicine", "Biology" ], "sha1": "533bd613fedce791195cd41b29206c4ed982619d", "year": 2020 }
pes2o/s2orc
Immune Clearance of Senescent Cells to Combat Ageing and Chronic Diseases Senescent cells are generally characterized by permanent cell cycle arrest, metabolic alteration and activation, and apoptotic resistance in multiple organs due to various stressors. Excessive accumulation of senescent cells in numerous tissues leads to multiple chronic diseases, tissue dysfunction, age-related diseases and organ ageing. Immune cells can remove senescent cells. Immunaging or impaired innate and adaptive immune responses by senescent cells result in persistent accumulation of various senescent cells. Although senolytics—drugs that selectively remove senescent cells by inducing their apoptosis—are recent hot topics and are making significant research progress, senescence immunotherapies using immune cell-mediated clearance of senescent cells are emerging and promising strategies to fight ageing and multiple chronic diseases. This short review provides an overview of the research progress to date concerning senescent cell-caused chronic diseases and tissue ageing, as well as the regulation of senescence by small-molecule drugs in clinical trials and different roles and regulation of immune cells in the elimination of senescent cells. Mounting evidence indicates that immunotherapy targeting senescent cells combats ageing and chronic diseases and subsequently extends the healthy lifespan. Introduction Cellular senescence is a cell state in which the cell-cycle is generally irreversibly stopped [1], with cell-cycle reentry being a plausible scenario under specific circumstances, particularly in tumor cells [2]. Cellular senescence is significantly distinct from cell quiescence, which has a reversible cell cycle arrest. Cellular senescence is also different from cell terminal differentiation accompanied by generally irreversible cell cycle arrest but without macromolecular damage [1]. There are two major types of cellular senescence: stress-induced premature cellular senescence [3] and replicative senescence due to a repeated cell cycle, which is usually mediated by telomere shortening [4]. Cellular senescence plays beneficial critical roles in numerous biological processes [5], such as tumor suppression [6,7], embryonic tissue remodeling [8] and wound healing after injury [9]. p16 Ink4a (encoded by the INK4a/ARF locus, also known as CDKN2a, hereafter referred to as p16)-induced senescence of mouse and human pancreatic beta cells also promotes insulin secretion [10]. However, excessive accumulation of senescent cells causally shortens the healthy lifespan [11] and drives organ ageing [12], age-related organ deterioration/disorders [13,14], tissue dysfunction and chronic diseases, including cardiovascular diseases (CVDs) [15,16], cancer [2], neurodegenerative diseases [17,18] and osteoarthritis [19]. Usually, sudden or acute senescence will exert beneficial functions such as anti-fibrosis [20] and wound healing [21]. Therefore, the homeostasis of cellular senescence is crucial for normal physiology. Generally, cellular senescence is caused by various intrinsic and extrinsic factors, including telomere (the repetitive sequences of DNA at the end of eukaryotic chromosomes) Table 1. Cellular senescence leads to chronic diseases or tissue ageing in animals and humans. 
Types of Senescent Cells / Disorders and Aged Tissues / References:
Adipocytes: poor physical function, vascular dysfunction, cardiac ageing and a shorter health span and lifespan in mice [59-62]
Astrocytes: neuropathology related to Parkinson's disease [57]
Astrocytes and microglia: cognitive decline [18]
Beige progenitor cells: age-related decline in beiging and thermogenesis [63]
Beta cells: type 1 diabetes and type 2 diabetes [64,65]
Cardiac progenitor cells: impaired heart regeneration [66]
Cardiac fibroblasts: age-related cardiac fibrosis and dysfunction [61]
Cardiomyocytes: cardiac ageing (fibrosis and hypertrophy) and heart failure [67-69]
Cholangiocytes: liver fibrosis [70]
Chondrocytes: osteoarthritis [71]
Endothelial cells: atherosclerosis, artery stiffness, thrombosis and heart failure with a preserved ejection fraction [72-74]
Endothelial progenitor cells: impaired neovascularization and preeclampsia [75]
Fat progenitor cells: lipodystrophy and fat loss in old mice [76]
Fibroblasts: atherosclerosis, lung fibrosis and decreased health and life span [77-79]
Fibroblasts (in synovial tissue): rheumatoid arthritis [80]
Glial cells: neuropsychiatric disorders, including anxiety and depression [81]
Hematopoietic stem cells: immune function decline [82]
Hepatic stellate cells: liver fibrosis [20,83]
Hepatocytes: age-related hepatic steatosis [84]
Macrophages: atherosclerosis [47]
Melanocytes: human skin ageing [85]
Muscle stem cells: sarcopenia [82]
Myofibroblasts: myocardial fibrosis reduction [86]
Neural progenitor cells (SOX2+): progressive multiple sclerosis [87]
Oligodendrocyte progenitor cells: cognitive deficits in Alzheimer's disease mice [56]
Osteocytes: age-related osteoporosis (bone loss) in mice [88]
T cells: abnormal glucose homeostasis, insulin resistance, physical frailty [46,89]
Vascular smooth muscle cells: atherosclerosis, AAA, TAA, artery restenosis, aortic calcification, vasomotor dysfunction in aged or atherosclerotic mice [48-50,54,90]
SOX2, SRY (sex-determining region Y)-box 2. For definitions of other abbreviations, please see the main text.
The availability of oxidized nicotinamide adenine dinucleotide (NAD+) decreases with age and under certain disease conditions [91]. NAD+ is usually generated by the kynurenine pathway of tryptophan catabolism [92] and salvage pathway [93]. NAD+ precursor nicotinamide riboside (NR) prevents muscle stem cell senescence by improving mitochondrial function [94]. Treatment with NR rejuvenates muscle stem cells in old (aged 22-24 months) mice by inducing the mitochondrial unfolded protein response and synthesis of prohibitin proteins. Moreover, NR delays the senescence of neural stem cells and melanocyte stem cells and enhances the mouse lifespan [94]. Oral administration of nicotinamide mononucleotide (NMN), an essential NAD+ precursor, to regular chow diet-fed wild-type C57BL/6N mice for 12 months remarkably and effectively mitigates age-related pathological alterations in mice without any noticeable side effects. For example, NMN suppresses age-associated body weight gain, promotes physical activity and improves insulin sensitivity, plasma lipid profile, eye function, tear production and bone mineral density [91]. NMN treatment also improves blood flow and increases endurance in old (aged ~20-22 months) mice by promoting sirtuin deacetylase SIRT1-mediated induction of capillary density, an effect synergized by exercise [42].
Interestingly, a recent clinical trial (NCT03151239) for NMN safety in humans reported that single oral administration of NMN shows no significant clinical symptoms or alterations in the heart rate, blood pressure and body temperature, suggesting the single oral supplementation of NMN is safe and effectively metabolized in healthy men [95]. However, the potential therapeutic strategy of NMN for anti-ageing and chronic diseases needs to be further explored and characterized. It was reported that mitochondria-targeted gasotransmitter hydrogen sulfide (H 2 S) delays endothelium senescence [96]. Recently, substantial evidence indicates that H 2 S exerts a potential evolutionarily conserved function of anti-vascular ageing [42]. H 2 S plays this role via the regulation of endothelial NAD + levels [42] or post-translational modification of reactive cysteine residues by protein persulfidation (S-sulfhydration) [97]. Thus, the H 2 S generator sodium hydrosulfide could battle vascular ageing and chronic diseases. Rapamycin decelerates cellular senescence in vitro and in vivo via mechanistic target of rapamycin (mTOR). Recently, rapamycin was reported to exert in vivo neuroprotective and anti-ageing effects via the activation of lysosomal mucolipin TRP channels, independent of mTOR [98]. Moreover, rapamycin, the US Food and Drug Administration-approved mTOR inhibitor, has been shown to extend the median and maximal lifespans of both male and female genetically heterogeneous mice [99]. Twenty-four-month-old female C57BL/6J mice treated with rapamycin for 3 months present improved late-life vascular contractile function and antihypertrophic signaling in the aged heart with remission in age-associated inflammation. Rapamycin treatment also results in beneficial behavioral, skeletal and motor changes in old mice [100]. A phase 2a clinical study with a low-dose combination of a catalytic (BEZ235) plus an allosteric (RAD001) mTOR inhibitor that selectively inhibits mTOR downstream target of rapamycin complex 1 (TORC1) for 6 weeks demonstrates that mTOR inhibitor therapy is safe, enhances immune function, decreases the incidence of upper respiratory infections and improves the response to influenza vaccination in seniors [101]. The clinical trial NCT03103893 indicates that topical rapamycin treatment for 6~8 months decreases p16 expression associated with reduced cellular senescence of human skin and improves the clinical signs of ageing with increased collagen VII expression in the skin [102]. Metformin, a widely prescribed first-line oral drug to treat type 2 diabetes, inhibits cellular senescence in vitro and in animal models via multiple molecular and cellular mechanisms [103][104][105]. For example, metformin inhibits oncogene-induced SASP by blocking nuclear factor-κB activation in human diploid fibroblasts [106]. Interestingly, metformin has been reported to extend the healthspan in Caenorhabditis elegans via the liver kinase B1/5 AMP-activated protein kinase pathway [107] or changing microbial folate and methionine metabolism [108]. Recently, Pryor et al. reported that gut microbes integrate nutrition to regulate metformin effects on host longevity through the transcriptional regulator cAMP response protein (CRP)-mediated phosphotransferase signaling pathway. They predicted the bacterial production of agmatine (a product of arginine metabolism), a mediator of metformin effects on host fatty acid metabolism and lifespan extension [109]. 
The first clinical trial (phase 4) of the metformin effect on the biology of human ageing was launched as "The Metformin in Longevity Study (MILES)" in 2014. The results indicate that metformin modulates metabolic and nonmetabolic gene expression in skeletal muscle and subcutaneous adipose tissues of older persons [110]. The phase 2 clinical trial (NCT02570672) "Metformin for Preventing Frailty in High-risk Older Adults," which considered frailty as a vital endpoint, was undertaken since 2015 [111]. Another phase 4 clinical study (NCT02915198) investigating the outcome of metformin in patients with pre-diabetes and established atherosclerotic cardiovascular disease started in February 2019. The recent multicenter trial "Targeting Ageing with Metformin (TAME)" focusing on targeting ageing and chronic conditions may be supported by NIH in the future [112]. Given the uncertain side effects (vitamin B12 deficiency, lactic acidosis and gastrointestinal side effects) of metformin presenting in some individuals [104], personal metformin therapy for anti-ageing may demand more in-depth study. In addition, β-hydroxybutyrate (β-HB) may mediate the effect of a ketogenic diet [113,114], intermittent fasting [115] or exercise [116] on healthspan extension or neuroregeneration because of its anti-inflammation [117], inhibition of vascular cell senescence [118] or immune activation by the formation and maintenance of CD8 + memory T cells [119]. It would be interesting to test the action of β-HB on healthy ageing and chronic diseases in preclinical and clinical research. Interestingly, recent clinical trials with senolytics D + Q demonstrate promising results. The first-in-human open-label pilot study indicates that D + Q directly eliminates senescent cells in human adipose tissue and skin [120] and significantly improves the physical function of participants with idiopathic pulmonary fibrosis [45]. These results suggest that the strategy of combining treatments targeting different senescent cell populations with distinct phenotypes would be valid for anti-ageing and chronic diseases. Immunosurveillance of Senescent Cells Generally, senescent cells can be removed by apoptosis and the immune system [121,122]. Because apoptosis evasion features most senescent cells, the immune system, including adaptive and innate immune cells, plays a critical role in the eradication of senescent cells at a young stage or under physiological conditions. Although senescent cells can be induced to undergo apoptosis by senolytics [45,60,78,81,120], these apoptotic cells must be finally cleared by the immune system. Interaction between senescent and immune cells affects immune system function. Senescent cells recruit and make immune cells senescent and dysfunctional via SASP, leading to persistent and excessive accumulation of senescent cells [122]. However, the precise mechanism underlying senescent cell accumulation within tissues is still debatable. It is unknown whether this is due to senescent cell increase exceeding the immune system's ability to clear them or immune cell dysfunction. Maintenance of Immune System Function to Keep Healthy Longevity A study investigating human ageing at the individual level by frequent sampling and prolonged deep molecular profiling indicates that immune pathways are one of the significant pathways that alter with age [38]. 
Males generally have a shorter average lifespan than females partly due to fewer B cells and weaker B-cell-mediated humoral immunity inhibited by the CCL21-GPR174-Gαi pathway [123]. With ageing, human immune cells become senescent (known as immunosenescence). Many functions of the immune system progressively decline with age (younger than 100 years). For example, the number of inhibitory receptor natural killer group 2A (NKG2A)-positive CD8+ T cells in the blood of healthy volunteers dramatically increases with age. These highly differentiated CD8+ T cells can be inhibited by human leukocyte antigen (HLA)-E generated by senescent fibroblasts and the endothelium [124]. It was reported that aged mice [125] and humans [125,126] have an increased proportion and suppressive activity of FOXP3+ regulatory T (Treg) cells, which suppress T effector cell function. Interestingly, a recent study using single-cell RNA analysis for circulating immune cells demonstrated that supercentenarians older than 110 years with healthy ageing have an increased number of CD4+ cytotoxic T lymphocytes (~25.3% of total T cells) compared with only 2.8% of all T cells in young controls (~50-80 year olds), while both the supercentenarians and control groups have almost the same number of T cells. These CD4+ cytotoxic T cells in supercentenarians are produced by clonal expansion and have an identical transcriptome as cytotoxic CD8+ T cells. However, the supercentenarians have a dramatic reduction of the B-cell number compared with controls [127]. These immune signatures of supercentenarians well explain that the increased healthy longevity is due to immunosurveillance of some conditions such as infections [128] and tumor development [129]. Immunotherapy to Eliminate Senescent Cells Mounting evidence demonstrates that immune surveillance of senescent cells is executed by different immune cells such as macrophages, natural killer (NK) cells and cytotoxic T cells in cancer [5,130] and chronic liver cirrhosis [20]. Different senescent cells generate distinct ligands that attract individual immune cells for immunosurveillance (Figure 1). For example, senescence-activated hepatic stellate cells upregulate cell-surface MICA and ULBP2, ligands of activating receptor NKG2D on NK cells [20]. Presently, senescence immunotherapy is an emerging research arena [20,131-133]. Senescence immunotherapy strategy is also a promising alternative to senolytics to remove senescent cells in the prevention and cure of ageing and chronic diseases (Table 2). Different immune cells have a distinct capability to identify and eliminate the unique senescent cells. Here, we discuss the roles and regulation of vital immune cells in combating chronic diseases and ageing.
Table 2 (immune cells: senescent cell targets; outcomes [references]):
CAR-T cells: fibroblasts; fibrosis reduction in mouse heart [134]
CD4+ T cells: murine hepatocytes; suppression of mouse liver cancer [130]
CD8+ T cells: fibroblasts; N/A [124]
Macrophages: uterine senescent cells; maintain postpartum uterine function in mouse [135]
NK cells: hepatic stellate cells; liver fibrosis resolution in mouse [20]
NK cells (in uterine): decidual cells (endometrial stromal cells); endometrial rejuvenation and remodeling at human embryo implantation [136]
NK cells: myeloma cells; tumor suppression in mouse [137]
NK cells: fibroblasts; N/A [124]
N/A, not available.
Although senescent cells can be induced to undergo apoptosis by senolytics and further removed by macrophages, macrophages also directly engulf senescent cells in cancer (Figure 1).
For example, p53 reconstitution induces liver tumor cell senescence with increased p16 and SA β-gal activity but not apoptosis, in mice in vivo. These senescent tumor cells recruit innate immune cells, such as macrophages, leading to the eradication of senescent tumor cells and subsequent tumor regression [145]. Kang et al. demonstrated that CD4 + T cells require monocytes/macrophages but not NK cells, to remove pre-malignant senescent hepatocytes, subsequently blocking liver tumor development [130]. F4/80 + macrophages are also key players in the removal of senescent uterine cells after parturition to keep postpartum uterine functionality in wild-type mice and maintain the success rate of a second pregnancy in a preterm birth mouse model [135]. Whether macrophages remove senescent cells in aged or diseased systems remains to be elucidated. Both macrophage and apoptotic or senescent cells control macrophage efferocytosis ability. Macrophages in the peritoneum, pleural cavity and lung alveoli constantly and efficiently engulf apoptotic cells at a steady state. The local tissue microenvironment programmes these macrophages with restricted responses to low doses of nucleic acid within apoptotic cells and lacking expression of toll-like receptor 9 (TLR9). Macrophage transcription factors Kruppel-like factors 2 (KLF2) and 4 (KLF4) are crucial controllers inducing gene expression necessary to silently eradicate apoptotic cells [138]. Recently, Yang et al. reported that the C-type lectin receptor LSECtin (Clec4g) on colon macrophages is required for macrophage engulfment and clearance of apoptotic cells, contributing to intestinal repair in dextran sulphate sodium-induced colitis [146].
Interestingly, activated Treg cells secrete IL-13 to stimulate IL-10 production in macrophages via binding to the IL-13 receptor. The elevated IL-10 signaling upregulates macrophage STAT3-mediated VAV1 (a guanine nucleotide exchange factor), which activates GTPase Rac1 to enhance apoptotic cell engulfment by macrophages [139]. It is noteworthy that the sustainable clearance of multiple apoptotic cells by macrophages requires dynamin-related protein 1 (DRP1)-mediated macrophage mitochondrial fission, allowing calcium release from mitochondria into the cytoplasm, triggered by the initial uptake of apoptotic cells [147]. DRP1-deficient macrophages present impaired efferocytosis in vivo and subsequently increased plaque necrosis in advanced atherosclerotic lesions of Western diet-fed LDLR knockout mice [147]. However, apoptotic cell fate also regulates macrophage efferocytosis ability. For example, apoptotic or senescent cells have increased expression of cell-surface protein CD47, one of the "don't eat me" signals, which impairs efferocytosis via binding to the inhibitory receptor signal regulatory protein alpha (SIRPα) on the macrophage. Antibodies blocking CD47 reactivate efferocytosis of diseased vascular tissue without altering cellular apoptosis, as well as alleviate atherosclerosis in both the aortic sinus and en face of the aorta in multiple mouse models [148]. Additionally, cyclin-dependent kinase inhibitor 2B (CDKN2B)-deficient apoptotic cells show decreased expression of calreticulin [149], which is a principal ligand required for engulfment activation via binding and activating LDL-receptor-related protein (LRP) on macrophages [150]. Thus, apoptotic cells without CDKN2B impair macrophages efferocytosis and result in the development of advanced atherosclerotic plaques with large necrotic cores. Moreover, supplementation with exogenous calreticulin restores the clearance of CDKN2B-deficient apoptotic cells by macrophages [149]. Therefore, it is crucial to understand the molecular mechanisms regulating macrophage phagocytic ability and apoptotic/senescent cell clearance in the progression and therapy of chronic diseases and organ ageing. Roles and Regulation of NK Cells in Senescent Cell Removal NK cells are involved in the elimination of senescent cells through interaction between the activating NKG2D receptor and its ligands expressed on senescent cells. For example, the YT human NK cell line preferentially destroys senescent IMR-90 fibroblasts [20]. This selectivity is due to selective upregulation of NKG2D ligands MICA and ULBP2 in senescent IMR-90 cells but not growing or quiescent cells [151]. Importantly, perforin-and granzyme-containing granule exocytosis but not death-receptor-mediated apoptosis, is required for NK cell-mediated killing of senescent cells [40]. Thus, mice with defects in granule exocytosis accumulate senescent stellate cells and display stronger liver fibrosis in response to a fibrogenic agent [131]. The ability of NK cells to kill senescent cells is controlled by multiple factors such as ligands from senescent cells. NKG2D receptor deletion instigates the accumulation of senescent stellate cells, resulting in increased liver fibrosis in mice in vivo [151]. Furthermore, NK cell activation by polyinosinic-polycytidylic acid [152] decreases the senescent cell number in the liver in vivo, leading to reduced liver fibrosis [20]. 
Chemotherapeutic drugs, including doxorubicin, melphalan and bortezomib, upregulate both the DNAX accessory molecule-1 (DNAM-1; CD226) ligand PVR (poliovirus receptor; CD155) and NKG2D ligands (MICA and MICB) on multiple myeloma cells presenting a senescent phenotype. These ligands enhance NK cell susceptibility [137]. Pereira et al. recently reported that senescent human primary dermal fibroblasts and endothelial cells express higher levels of the atypical major histocompatibility complex (MHC) Ib molecule HLA-E via the p38 signaling pathway than non-senescent cells. HLA-E expression is also increased in the senescent fibroblasts of human skin during ageing. HLA-E then inhibits immune cytotoxicity targeting senescent cells by interacting with the inhibitory receptor NKG2A expressed on NK cells [124]. However, p53-expressing senescent liver tumor cells recruit NK cells by inducing the chemokine CCL2 but not CCL3, CCL4 or CCL5, in senescent tumor cells without affecting the ligand expression of RAE-1 proteins in the tumor cells [153]. Collectively, agents blocking the interaction between HLA-E and NKG2A [124], the humanized anti-NKG2A monoclonal antibody monalizumab [154] and NKG2A protein expression blockers [155] would be promising strategies for the immune clearance of senescent cells and subsequent anti-ageing and chronic diseases. Additionally, chimeric antigen receptor (CAR)-NK cells may be a valuable tool for senescence immune surveillance. CAR-T Cells T cells play crucial roles in immune surveillance and healthy longevity. CD4 + T cells generally regulate the immune response via multiple cytokines. CD4 + cytotoxic T cells can directly kill senescent tumor cells by recognizing MHC II molecules, which are usually absent in healthy cells but present in a subset of tumor or senescent cells [130]. CD8 + cytotoxic T cells directly eradicate target cells using cytotoxic molecules via recognizing MHC I molecules within nearly all cells [127]. However, the selectivity and efficiency of cytotoxic T cells decline with age. Reinstructing cytotoxic T cells to identify specific antigens on cancer cells using either a modified T-cell receptor or a chimeric antigen receptor (CAR) has been successfully used for specific cancer therapies [156]. It was reported that high expression of fibroblast activation protein (FAP) in active cardiac fibroblasts but not cardiomyocytes, drives abundant cardiac fibrosis and consequent myocardial disease [134]. Recently, the adoptive constitution of engineered specific CAR-CD8 + T cells selectively targeting FAP notably alleviates cardiac fibrosis and reverses both systolic and diastolic cardiac function in angiotensin II and phenylephrine-treated mice [134]. Importantly, c-Jun overexpression protects CAR-T cells from dysfunction by inducing T-cell exhaustion resistance [157]. Because senescent cells generate specific cell-surface antigen, such as band 3 [158], it is very promising to develop specific CAR-T cells for the selective depletion of senescent cells. Notably, ligands or antigens to activate the receptors NKG2D and DNAM-1 or ligands HLA-E for the inhibitory receptor NKG2A, have demonstrated expression in senescent cells. Therefore, it may be possible to target senescent cells by engineering T cells expressing NKG2D-CAR (NKG2D-CAR-T cells) that recognize NKG2D ligands on the surface of senescent cells based on cancer research [159]. 
Dendritic Cells Dendritic cells (DCs), a professional phagocytic cell type, can also identify and eradicate apoptotic cells [160]. Notably, CD24 + DCs generally exert their removal function via T-cell regulation. For example, CD103 + DCs selectively carry apoptotic intestinal epithelial cells to mesenteric lymph nodes, which function as critical determinants to induce tolerogenic CD4 + regulatory T-cell differentiation in mice [161]. CD11b + DCs with dysfunctional autophagy due to Atg16l1 deficiency expand aortic CD4 + Treg cells and inhibit atherogenesis in Ldlr −/− mice [162]. Indoleamine 2,3-dioxygenase 1 (IDO1)-expressing and chemokine (C-C motif) receptor 9 (CCR9)-positive plasmacytoid dendritic cells (pDCs) in the aorta locally induce aortic Treg cells, which produce IL-10 and subsequently prevent atherogenesis in mice [163]. Theoretically, senescent cells may produce specific cell-surface antigens and then DCs process and express these antigens on the cell surface that are then identified by T cells. However, it remains extremely unknown whether and how DCs eliminate apoptotic or senescent cells to prevent the development of chronic diseases, including atherosclerosis. Conclusions and Future Perspectives Excessive and persistent accumulation of different senescent cells drives unique ageing and chronic disease development. Because senescent cells are also beneficial in the short term, the homeostasis of senescent cells is critical for healthy ageing. Several small-molecule drugs and senolytics have been entered into clinical trials to combat cellular senescence-associated ageing and chronic diseases. Although immune elimination of distinct senescent cells is an emerging and promising strategy for anti-ageing and the therapy of different chronic diseases, such immunotherapy is not cost-free and does have side effects. To translate this novel strategy into the clinic, we plan to carry out the following investigations: 1) discovering suitable molecular biomarkers or pathways for personal ageing in preclinical and clinical research; 2) identifying unique antigens or ligands of senescent cells for immunosurveillance; 3) including both genders for research because sex differences are present for ageing and chronic diseases [123,164]. Funding: This study was funded in part by the following agencies: National Institute on Ageing (AG047776), National Heart, Lung and Blood Institute (HL089920, HL110488, HL128014, HL132500, HL137371 and HL140954) and National Cancer Institute (CA213022). M.-H. Zou is an eminent scholar of the Georgia Research Alliance. Conflicts of Interest: The authors declare no conflict of interest.
v3-fos-license
2023-04-21T06:16:25.420Z
2023-04-20T00:00:00.000
258239225
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": null, "oa_url": null, "pdf_hash": "61cb33223e0c684b28dec15ae16e5c54903acfe4", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41742", "s2fieldsofstudy": [ "Medicine" ], "sha1": "d0d6c868f280a0a09e5da7cdf27b7509ab22def0", "year": 2023 }
pes2o/s2orc
Discharge against medical advice in Special Care Newborn Unit in Chattogram, Bangladesh: Prevalence, causes and predictors Introduction Discharge against medical advice (DAMA) is an unexpected event for patients and healthcare personnel. The study aimed to assess the prevalence of DAMA in neonates along with characteristics of neonates who got DAMA and, causes and predictors of DAMA. Methods and findings This case-control study was carried out in Special Care Newborn Unit (SCANU) at Chittagong Medical College Hospital from July 2017 to December 2017. Clinical and demographic characteristics of neonates with DAMA were compared with that of discharged neonates. The causes of DAMA were identified by a semi-structured questionnaire. Predictors of DAMA were determined using a logistic regression model with a 95% confidence interval. A total of 6167 neonates were admitted and 1588 got DAMA. Most of the DAMA neonates were male (61.3%), term (74.7%), outborn (69.8%), delivered vaginally (65.7%), and had standard weight at admission (54.3%). A significant relationship (p < 0.001) was found between the variables of residence, place of delivery, mode of delivery, gestational age, weight at admission, and day and time of outcome with the type of discharge. False perceptions of wellbeing (28.7%), inadequate facilities for mothers (14.5%), and financial problems (14.1%) were the prevalent causes behind DAMA. Predictors of DAMA were preterm gestation (AOR 1.3, 95% CI 1.07–1.7, p = 0.013), vaginal delivery (AOR 1.56, 95% CI 1.31–1.86, p < 0.001), timing of outcome after office hours (AOR 477.15, 95% CI 236–964.6, p < 0.001), and weekends (AOR 2.55, 95% CI 2.06–3.17, p < 0.001). Neonates suffering from sepsis (AOR 1.4, 95% CI 1.1–1.7, p< 0.001), Respiratory Distress Syndrome (AOR 3.1, 95% CI 1.9–5.2, p< 0.001), prematurity without other complications (AOR 2.1, 95% CI 1.45–3.1, p < 0.001) or who were referred from north-western districts (AOR 1.48, 95% CI 1.13–1.95, p = 0.004) had higher odds for DAMA. Conclusions Identification of predictors and reasons behind DAMA may provide opportunities to improve the hospital environment and service related issues so that such vulnerable neonates can complete their treatment. We should ensure better communication with parents, provide provision for mothers' corner, especially for outborn neonates, maintain a standard ratio of neonates and healthcare providers, and adopt specific DAMA policy by the hospital authority.
Introduction The terms "discharge against medical advice," "leave against medical advice," and "self-discharge" have all been used to mean disregarding the doctor's advice and leaving the hospital early, regardless of what might happen. Not all sick infants are fortunate enough to be admitted to the hospital [1], and when they are, leaving before receiving all of their care is a lost opportunity. The neonatal period, a child's first month of life, is the most critical time for survival [2]. Self-discharge poses a unique challenge to neonates, as they not only have emotional and cognitive immaturity but also have no legal rights to decide for themselves [1,3]. When a patient or caregiver decides to leave the hospital against the consent of the managing physician, a conflict arises between the principles of autonomy and beneficence; which of them takes priority is the question [4,5]. Again, a DAMA consent form signed by the caregiver may help, but it does not make healthcare providers and hospitals immune from legal implications [6]. The rate of DAMA in pediatric wards reported in recent years ranged from 1.2% to 31.7% [7]. Studies have documented a higher rate of DAMA in developing than in developed countries [5,8]. Discharges against medical advice have an impact on mortality and readmission rates in both the pediatric and adult populations [9][10][11]; however, they may be significantly higher in neonates. There have been a few studies done in different NICUs internationally [1,[12][13][14], but there is a need for more data in Bangladesh. Bangladesh aims to reduce neonatal mortality to 12 per 1,000 live births or below by 2030 to meet the target of the Sustainable Development Goals of the United Nations. So quality care is needed for every neonate to reduce morbidity and mortality [15]. The study aims to ascertain the prevalence of DAMA among newborns admitted to a tertiary care hospital in Bangladesh, as well as the causes, and to identify factors predicting DAMA in order to reduce adverse outcomes of neonates. Study settings This case-control study was conducted over a period of six months, from July 2017 to December 2017, at the SCANU in the Department of Neonatology, Chittagong Medical College Hospital. It is one of the largest SCANUs in the country, with 32 government-allocated beds and 56 locally arranged beds. The average number of admitted neonates ranges from 160-210 per day in 100 cots. Admission criteria are gestational age less than 34 weeks, birth weight less than 1800 g, and any sick neonate irrespective of gestational age and birth weight. Neonates from the entire division are referred to this SCANU, which covers a total area of 33,909 km2. As a tertiary and referral hospital, admission of sick neonates cannot be denied in the SCANU. As a result, more than one neonate shares the same cot. During the study period, five consultants, including two professors, were working in the SCANU.
There were 8-10 mid-level doctors who were on rotation from pediatrics as part of their neonatal training, and approximately 40 senior staff nurses were working in three shifts. The Chattogram division (Fig 1), located in Bangladesh consists of 11 districts and 99 subdistricts [16]. Chattogram district serves as the administrative zone. In this study, the division was classified into six areas based on administrative zone, geographic characteristics, and access to healthcare facilities for neonates. These are Chattogram district urban (city corporation area), Chattogram district non-urban (14 sub-districts apart from city corporation area), North-western districts (6 districts), Hill tracts (3 upland districts), Cox's Bazar (1 coastal district) and Islands. Sampling Non-probability consecutive sampling technique was used. We have enrolled the data of all neonates admitted in this study period on the day of their outcome. We defined DAMA cases as any neonate whose caregivers removed them from the hospital despite the attending physician recommending that they remain. Controls were those who were discharged on the advice of the attending physician. The number of neonatal admissions in this period was 6167. We excluded 1362 neonates who died in SCANU. Of the remaining 4805 neonates, a total of 3217 neonates were discharged with the medical advice of attending physicians. There were 1588 parents or guardians seeking DAMA. They all were included to avoid selection bias by the interviewers. The neonates of case and control groups were matched on demography, geo-economic backgrounds or birth related variables. All controls included in this study had to meet the criteria of having minimum one individual with comparable age, sex, mode and place of delivery and diagnosis though the ratio of case: control was 1: 2.36. For every DAMA, at least two controls were available. Procedure of data collection Data was collected through a questionnaire. A questionnaire was semi-structured including demographic profile, clinical diagnosis and reasons given by the parents for DAMA. The questionnaire was piloted on five female and five male guardians and was modified accordingly. The content of the questionnaire was developed and modified based on a literature search, a focus group discussion, and a pilot study. We mentioned all the opinions found from the previous steps. If parents chose a different reason other than mentioned options, they had to write it. At first, physicians and senior staff nurses counseled them to complete treatment, informing the prognosis and danger of discontinuation of treatment. Those intent on taking DAMA after counseling had to fill up the questionnaire where the specific cause of self-discharge had to be chosen or written by themselves or the attending senior staff nurse. The respondents usually took up to 10 minutes to complete the questionnaire. Ethical consideration The research received approval from the "Ethical Review Committee" of Chittagong Medical College, Bangladesh. All participants signed an informed consent form, and confidentiality was maintained. Data analysis The dependent measure or outcome variable for the present study was the discharge of the neonates which was classified into two categories: DAMA and discharge with medical advice. Data analysis was conducted in multiple phases. 
In the first phase, a simple descriptive analysis (frequency and percentage) was undertaken to evaluate demographic variables such as gender, residence, mode of delivery, place of delivery, and time and day of the outcome. In the second phase, statistical analysis was performed to explore the association between discharge type and other related variables. Associations between categorical variables were assessed using a chisquare or Fisher exact test where appropriate. We compared the characteristics of the two groups using the chi-square test (χ2). We used binary logistic regression to model the correlates of discharge against medical advice. To assess the accuracy of the model, a goodness-of-fit test was performed, which provided insight into how well the model fits the data. We included each characteristic in the regression model that was significant (p = 0.05) in the analysis in bivariate comparison. We report adjusted odds ratio (AOR) and confidence intervals (CIs) from this model. Results A total of 6167 neonates were admitted during six month study period from July 2017 to December 2017. Bed occupancy rate was 141.2% in the study period. Guardians of 1588 neonates discharged the neonate against medical advice even after being counseled by attending physicians and nurses. Among these neonates discharged against medical advice, 63.5% were admitted on 1st day of their life. SCANU is the tertiary care and referral center for neonates of greater Chattogram. Only 30.4% (n = 482) of DAMA neonates were referred from urban areas. Forty-eight percent (48.4%, n = 769) of neonates were admitted from outside the city corporation area. Two hundred nine patients (13.2%) were from north-western districts. Referred neonates from hill districts (3.1%), Cox's Bazar (3.2%) and different islands (1.8%) were also self-discharged. The maximum numbers (61.3%, n = 974) of neonates were male. The male-to-female ratio was 1.6: 1. About 69.8% (n = 1108) of neonates were outborn. Most neonates (54.3%, n = 863) had 2500 g or more weight at admission. Those delivered at term were 1186 (74.7%), while 402 (25.3%) were preterm. Spontaneous vaginal delivery comprised 65.7% (n = 1044). Out of the total DAMA neonates, 42% (n = 667) left the ward in the evening and 10% (n = 161) at night. The percentage of DAMA was less (31.1%) on weekdays among total discharges, whereas it was more (47.2%) on weekly holidays and festive times. The mean hospital stay of DAMA patients was 3.66±3.95 days. One-third of DAMA patients (32.8% n = 521) were self-discharged within the first day of admission, 32.4% (n = 514) during 2-3rd hospital days, 24.2% (n = 384) during 4-7 days and 10.6% (n = 169) had eight days or more hospital stay (Table 1). The percentage of DAMA was calculated at 25.7% in this work. Patient-related causes (40.2%, n = 639) were prevalent behind DAMA. Hospital-related (33.4%, n = 529) and familyrelated causes (23.1%, n = 367) were also noted. Common reasons identified for DAMA were the false sense of parents about the wellbeing of their babies (28.7%, n = 455), lack of facilities for the mother (14.5%, n = 231), financial problems (14.1%, n = 233), dissatisfaction with treatment (10.9%, n = 173), baby can take oral feeding (7.4%, n = 117), lack of attendants (6.9%, n = 110), fewer facilities for attendants (6.9%, n = 69). Guardians had a fear of observing the death of other sick neonates in SCANU (3.5%, n = 56) and a lack of hope for further improvement (4.2%, n = 67) ( Table 3). 
The babies whose parents self-discharged them due to a false perception of wellbeing mostly suffered from asphyxia (45.7%) and sepsis (33.4%). There was a significant difference in family-related (p < 0.001) and patient-related (p < 0.001) causes of DAMA between preterm and term neonates. Only thirteen caregivers gave multiple causes for seeking DAMA. Among these respondents, two caregivers had financial problems and had also lost hope for clinical improvement in the neonate. Six caregivers cited the false sense of wellbeing together with the lack of facilities for mothers/attendants. Five caregivers were unsatisfied with the treatment and also mentioned the lack of facilities for mothers. The chi-square test was used to study the relationship between variables and the type of discharge. A significant relationship was observed between the variables of residence (p < 0.001), place of delivery (p < 0.001), mode of delivery (p < 0.001), gestational age (p < 0.001), weight at admission (p < 0.001), and day and time of outcome (p < 0.001) with the type of discharge. There was no significant relationship between gender (p = 0.107) or age at admission (p = 0.61) and the type of discharge (Table 1). The results of the logistic regression to investigate the impact of independent variables on the probability of DAMA are presented in Table 4. Our logistic regression model explained more of the variance in the outcome than the null model, which was reflected by the highly significant chi-square value (chi-square = 2365.3, df = 17, p < 0.001). Additionally, the Hosmer-Lemeshow goodness-of-fit test suggested that our model was a good fit to the data (p = 0.6, > 0.05). The new model performed better than the null model, with a higher rate of correct classification (Table 4). Discussion DAMA is a major public health issue that poses an immediate risk to the life of the neonate [3]. The prevalence of DAMA in this study was found to be very high at 25.7%. This finding is similar to the 22.24% and 25.4% reported from NICUs in India [12,13] but higher than in Nepal (18%) [1] and Nigeria (11.2%) [3]. Emergency admission and admission to the neonatal intensive care unit itself were reported as significant independent risk factors for DAMA [2,8]. Prejudiced decision making by family members contributed to increased DAMA among neonates [17]. The rate of DAMA differs with the study setting; the time of the study, economic status and socio-cultural factors also influence the rate [13,18]. A retrospective review of 10 years of medical records of neonates found only 1.6% DAMA at a university hospital NICU in Saudi Arabia [18]. There was no significant gender bias among DAMA cases. This is consistent with the previous literature [12,19,20]. About two-thirds of the DAMA infants (65.7%) were born vaginally, which made vaginal delivery a potential risk for DAMA. This finding matched previous studies in Saudi Arabia [18] and western Nigeria [3]. The higher DAMA rate was partly due to the earlier ambulation and discharge of the mothers compared to cesarean section. Bosco et al. found that the mother's antenatal check-up and parity influenced DAMA [10], although we did not collect such data. Unlike other NICUs at home and abroad, most of the DAMA neonates were outborn in our study. It was challenging for these neonates to stay here because their mothers were either at another hospital or at home. Most of the DAMA neonates had full-term gestation and standard weight at admission, which matched studies in India and Saudi Arabia [13,18].
Many parents had the wrong perception that a baby who was big and born at term was healthy [18]. DAMA was more on festive and weekends than any other days of the week. Similar findings were found by Kumar et al. and Turkistani et al. [13,18] Discharge decisions were typically made by the mid level doctors with the consent of consultants on the morning following the round. Due to a lack of staff and hospital policy, the discharge rate with advice was lower in the evening and almost zero at night. It could explain the increase in DAMA after office hours. Most of the neonates admitted in SCANU at their 1 st day of life. Percentage of admission gradually decreased with the age of the neonates. There was no significant variation between PLOS ONE discharge and DAMA group in this regard. Our study found higher rate of DAMA (90%) in first week; importantly, one-third of DAMA happened within the first 24 hours. Similarly 92.5% DAMA happened at a NICU in Lahore with 29.9% within the first 24 hours [17]. Newborns are at greater risk for illness in the first few days of life. The seriousness is augmented when dealing with a newborn at risk who is taken away from the intensive care unit against medical advice. On the other hand, only 20.4% discharge decisions were made within first 24 hours. The rate of discharge in first week was 77.2%. The percentage of DAMA in first week was 56%-73% in previous study [1,3,11,17,18]. The rituals of naming ceremony for neonates, typically held at seven days of age, may have contributed to the increased rate of DAMA during the first week [1,3]. In our study, we classified the residences of neonates into six classes and distinguished between urban and non-urban areas only within the Chattogram district. Around half (48.4%) of the DAMA neonates were admitted from non-urban areas, and 21.3% were from outside the Chattogram district, which contributed two-thirds of the DAMA. The residence of neonates reflects the availability of neonatal treatment facilities in the region. Apart from this particular SCANU, the other government SCANUs available in this division were in Cox's Bazar and Bandarban. It is worth noting that there were also private SCANUs available in the area, although they may have limited capacity and higher costs compared to government facilities. The number of DAMA patients was higher from rural areas in India [19]. DAMA is a major challenge for hospital treatment teams and it is practically common in all medical facilities and difficult to eliminate leading to unpredictable complications [2]. PNA, sepsis and prematurity without complications were the most common diagnoses in DAMA neonates. These diagnoses have also been identified by World Health Organization as the greatest cause of mortality of newborns in the developing world. Sepsis, PNA, respiratory distress, and congenital heart disease were the most prevalent diagnostic related categories in a study of India [12]. Prematurity (25.8%) exceeds PNA (23.9%) and sepsis (22.6%) in Nigeria [3]. Another two studies found additional causes of low birth weight (10.2%) and neonatal jaundice (10.2%) [13]: major congenital malformations (58%) and perinatal asphyxia (14%) [21]. Common congenital anomalies which lead to DAMA were congenital anorectal malformations, esophageal atresia/tracheoesophageal fistula, and congenital intestinal atresia [2]. In many studies, non-affordability or financial constraints were the most common reason for taking self-discharge [1,10,12,19]. 
Like a study done in Iran [20], we found false perceptions of the wellbeing of neonates by the parents as the most typical cause of DAMA and it was primarily found in term neonates. One-third of such neonates were treated and improved with supportive care and antibiotics for sepsis, so parents had wrong idea of taking discharge early. We had a few numbers of midlevel doctors and senior staff nurses. The average bed occupancy rate in the study place was above 100 percent. As all sick neonates were allowed for admission, we were overburdened with a huge number of neonates. The communication between health care providers and parents may not be the standard. Fortunately, the percentage of dissatisfaction (10.9%) was not much. Lack of facilities for mothers and attendants contributed 20% to DAMA, which was quite reasonable. Mothers require comfort after giving birth to a baby [20]. The hospital authority couldn't provide separate spaces for the mother and attendants. Most of the attendants and some mother had to sit and recline on the ground outside the SCANU after visiting hours. Financial constraints were the major cause of DAMA among preterm neonates. It may be explained by the fact that they required intensive care and a more extended hospital stay than term neonates. Belief in the incurability of the diseases, multiple malformations, concern about the prognosis of the diseases, family pressure, sociocultural thoughts, and inappropriate behavior with patients by hospital team were the causes of DAMA found in different studies [2,12,17,18,22]. False perception of wellbeing (34.4%), lack of facilities for mothers (15%) and dissatisfaction (13.6%) were the major causes allowed them to take rapid decisions about DAMA on the 1 st day. In our study, neonates who delivered vaginally had 1.56 times more chance for DAMA than neonates delivered by cesarean section. Though there was an increased percentage of outborn and term neonates having standard weight at admission among self-discharged neonates, it was not found to be predictors of DAMA in regression analysis. Rather, preterm neonates had higher odds (AOR = 1.37 95% CI 1.07-1.7, p = 0.013) for DAMA than term neonates. In our analysis, prematurity was a variable of gestation but was not noted as distinct diagnostic category as most of the preterm neonates were admitted with various complications such as asphyxia, sepsis, respiratory distress and they were entitled accordingly. Interestingly, when they were admitted only due to prematurity without any complications, the risk of DAMA increased (AOR 2.1, 95% CI 1.45-3.1, p <0.001) Neonatal sepsis (AOR 1.4, 95% CI 1.1-1.7, p<0.001) and Respiratory distress syndrome (AOR 3.1, 95% CI 1.9-5.2, p <0.001) were found as other predictors of DAMA in our study. Neonates often require longer stays in the NICU to complete antibiotic protocols, control seizures, and establish feeding. However, caregivers may have difficulty accepting these extended stays, especially when compared to the shorter hospital stays commonly seen in adults. Moreover, caregivers were not interested to complete the antibiotic regimen when neonates showed transient improvement after starting initial management. The grave outcomes of neonates may persist when cases of culture-proven sepsis go undetected due to DAMA. Bosco et al. 
Bosco et al. found low birth weight (AOR 2.1, 95% CI 1.7-9.4, P = 0.030), low APGAR score (AOR 6.9, 95% CI 2.1-15.7, P < 0.001) and non-life-threatening congenital malformation (AOR 2.1, 95% CI 1.1-2.3, P = 0.005) to be predictors of DAMA [10]. Turkistani et al. found that the prevalence of DAMA was influenced by time factors (weekend and season): Wednesdays (27.3%) and the month of May (33%) each accounted for about one-third of all DAMA infants [18]. We also found a substantial impact of holidays (festive days and weekends) and of timing after office hours on DAMA. We could not assess month-wise variation due to the short study period. Neonates admitted from north-western districts had higher odds of DAMA. These six districts were situated at a minimum distance of 92 kilometers from the Chattogram district. Since there was no government SCANU in these districts, neonates were managed in private hospitals. When the situation worsened, or due to family demands, neonates were usually referred, often without their mothers. Previous studies have addressed the role of residence in other districts and long distance between hospital and home in cases of DAMA [18,20]. Limitation Although we included all neonates within a specific time range, this was a single-center study. After DAMA, we were unable to monitor the newborns, and we lacked precise information on their prognosis. No comprehensive maternal information was recorded, which could be a contributing factor to DAMA. Conclusion DAMA is a complex issue and needs attention. This study helped us learn about DAMA in depth. The prevalence of DAMA in this study was considerable. False perception of the wellbeing of neonates, fewer facilities for mothers, and financial problems were the leading causes behind DAMA in this study. Prematurity, vaginal delivery, timing of DAMA, and residence at a distant place were found to be important predictors. Neonates suffering from sepsis and RDS had higher odds of DAMA, so care should be taken in handling such neonates, including counseling of parents at admission regarding completion of treatment. A standard ratio of neonates to healthcare providers should be maintained for better management of neonates and better communication with the parents. We propose arranging a separate corner for maternal rest and accommodation, which may reduce the rate of DAMA. Hospitals should develop specific DAMA policies to safeguard healthcare professionals. Supporting information S1 File. Data July to December. (SAV)
Trajectory-tracking control from a multibody system dynamics perspective The development of modern mechatronic systems is often driven by the desire for more efficiency and accuracy. These requirements not only result in more complex system designs, but also in the simultaneous development of improved control strategies. Therefore, control of multibody systems is an active field of research. This contribution gives an overview of recent control-related research from the perspective of the multibody dynamics community. A literature review of the research activity in the journal Multibody System Dynamics is given. Afterwards, the framework of servo-constraints is reviewed, since it is a powerful tool for the computation of a feedforward controller and it is directly developed in the multibody system dynamics community. Thereby, solution strategies for all possible system types, such as differentially flat systems, minimum phase and non-minimum phase systems, are discussed. Selected experimental and simulation results are shown to support the theoretical results. Introduction The desire for more efficiency drives the development of modern multibody systems in many application areas. This often results in increasing requirements on accuracy and/or manipulation speed of the respective system. These needs are met by improving the mechanical designs as well as the control strategies. Background and motivation Regarding the mechanical design, current development trends include the introduction of lightweight structures, more complex mechanisms and cable manipulators. Optimization strategies are then often used to find an optimal design of the system [1]. The aforementioned design trends often result in underactuated systems with more degrees of freedom than independent control inputs [2]. This is, for example, due to unactuated flexible modes of lightweight mechanisms. Generally, underactuation leads to challenges for the controller design, because it is not possible to control all degrees of freedom independently. Therefore, the control strategies must be adapted according to the mechanical design. While most general control-related research is performed by the control theory community, there are also many contributions by the multibody system (MBS) dynamics community. One such contribution from the MBS community is the framework of servo-constraints used for trajectory-tracking control. Problem statement Generally, the control of mechatronic systems is a huge field, which is tackled by the control theory community as well as the communities of the respective application examples. There exist many review papers for different aspects of this problem. However, a review analyzing the important and relevant aspects of the intersection between control theory and MBS is lacking so far. As an efficient method for feedforward control, the framework of servo-constraints is developed by the MBS community itself. Many research papers address specific problems and individual aspects of this method. In particular, the contributions focus on either differentially flat, minimum phase or non-minimum phase systems.
However, there does not exist a paper giving a comprehensive overview of the methods for all possible system types and thereby accompanying the complete process from system analysis to feedforward control design by means of servo-constraints. Moreover, there exist different mathematical formulations of the stable inversion problem in the context of servo-constraints. However, a thorough numerical comparison of the stable inversion formulations in terms of numerical efficiency and accuracy is missing so far in the literature. Scope and contribution The first part of this paper gives an overview about the research trends in control from a multibody dynamics perspective. This highlights the currently relevant control topics for the multibody system dynamics community. As one of the results of the literature survey, it is shown that model inversion is a crucial aspect for accurate control of complex multibody systems. Therefore, in the remainder of the paper, the framework of servo-constraints is reviewed in more detail, since it provides an efficient method for feedforward control of general multibody systems. This approach is a very natural approach for the MBS community, since the use of constraints is an inherent feature of MBS. Thereby, the solution strategies depend on the underlying system properties. All possible system types, such as differentially flat systems, minimum phase and non-minimum phase systems are discussed in a comprehensive manner. Application examples are given for each type in order to show the versatility of the method. For this purpose, existing results are recalled for differentially flat systems (overhead crane [3]). New results are presented for minimum phase systems in terms of a three-dimensional robotic manipulator. Moreover, new numerical results are presented for a comparison of the different formulations of stable inversion for non-minimum phase systems. In total, this comprehensive overview aims to close the gap in the literature for servoconstraints by giving a broad perspective and instructions for applying servo-constraints to a general multibody system regardless of the underlying system type. Organization of the paper The paper is organized as follows. Control-related contributions from the multibody dynamics community are reviewed in Sect. 2. Afterwards, the focus lies on an overview of the servo-constraints framework. As a basis, the mathematical model of general multibody systems is introduced in Sect. 3. This system analysis does not only apply to feedforward control, but also gives important insights for feedback control. The framework of servoconstraints is presented in Sect. 4. In Sect. 5, the approach is detailed for differentially flat and minimum phase systems. A cable robot and a three-dimensional manipulator with one passive joint serve as application examples. In Sect. 6, solution approaches are stated for non-minimum phase systems. A manipulator with one passive joint serves as an application example. Section 7 concludes the contribution with a summary. Literature survey on control approaches in multibody system dynamics A first overview of topics that are interesting for the multibody community can be obtained by focusing on papers published in the journal Multibody System Dynamics. The increasing research activity is reflected by an increasing number of contributions in the journal that are related to control aspects, see Fig. 1. 
Not only the absolute number, but also the relative amount of control-related contributions has increased over the past 25 years. In the following, the literature is briefly reviewed and selected publications are summarized. Current application examples lie in the areas of vehicle dynamics, space robotics, cable robots, parallel robots, mobile robots, and flexible multibody systems. Regarding the control concepts, feedforward and feedback strategies can be distinguished. Feedforward control is usually a control component for trajectory tracking, since the feedforward control moves the system on the prescribed trajectory. Regarding feedback control, various control strategies are applied in the context of multibody systems: optimal control, feedback linearization, robust control, and fuzzy control, to name only a few concepts. In order to give a broad overview of relevant topics, the 25 most cited contributions (20 through 99 citations as of July 2022) are categorized and summarized in the following. Feedforward control The most cited control-related contribution [4] in Multibody System Dynamics proposes a framework for the feedforward control of underactuated multibody systems, which have more degrees of freedom than independent control inputs. The inverse dynamics control problem is solved by introducing servo-constraints. The resulting differential algebraic equations (DAEs) are analyzed and compared to multibody systems with geometric constraints. The second most-cited contribution also deals with the feedforward control of underactuated multibody systems [5]. The inverse dynamics of a kinematically undetermined cable-suspension manipulator is computed with the help of a differentially flat system output. Further contributions deal with the feedforward control of multibody systems. The servo-constraints concept from [4] is extended in [6] to crane systems with the system dynamics directly written in partly redundant coordinates. The method of servo-constraints is compared to the classical input-output normal form approach in [7]. Moreover, an optimal design procedure is proposed to design underactuated multibody systems such that they are minimum phase systems. Methodologies for deriving the inverse dynamics of parallel robots are presented in [8,9]. An algorithm for the real-time solution of the inverse dynamics problem of redundant manipulators is presented in [10], where the redundancy is resolved by least-squares minimization of properly chosen cost functions. In [11], the inverse dynamics of redundantly actuated parallel manipualtors is determined. Thereby, the degrees of freedom resulting from redundant actuation are used to manipulate the prestress in the manipulator in order to either eliminate prestress or to generate a desired end-effector stiffness. An inverse dynamics scheme is coupled to a PD controller in [12] with application to a manoeuvrable tether-net space robot. Thereby, the inverse dynamics is rewritten as an optimization problem. A controller for a slider-crank mechanism with joint friction and joint clearance is developed under the constraint of continuous contact within the joint clearance model in [13]. The control strategy resembles the computed torque method. A framework for solving the optimal control problem for large multibody systems using a direct approach is presented in [14]. The special structure of multibody systems in DAE form is exploited during the symbolic derivation of the necessary conditions and their Jacobian matrices. 
The optimal control problem for general multibody systems in DAE form is also treated in [15]. Thereby, an energy-preserving scheme is used for direct transcription of the infinitesimal optimal control problem. Examples are given for two-link and three-link manipulators. Feedback control For feedback control, mostly PID-type controllers, feedback linearization, and robust control have attracted attention in the multibody system dynamics community, according to the above-mentioned criteria. These are summarized in the following. PID control Fractional order PI and PD controllers are designed in [16] for a serial manipulator while considering the dynamics of the induction motors. The control parameters are tuned using a particle swarm optimization scheme. A cascaded control loop containing a PID and a fuzzy controller is designed in [17] in order to control an unmanned bicycle. Feedback linearization Feedback linearization is applied to control a space robot capturing a free-floating object in [18]. A kinematic output controller is combined with a feedback linearization scheme in [19] for the trajectory tracking problem of a car with n trailers. Thereby, experimental results are provided for a car with one trailer. Optimal control Nonlinear model-predictive control is applied to a wind turbine in [20]. Thereby, the controller is based on a reduced model with a defect, which estimates unmodeled dynamics using a neural network. The speed-tracking problem of a motorcycle is [21] by optimal linear preview control. In [22], control of flexible multibody systems with piezoelectric sensor/actuator pairs is addressed. Thereby, classical and optimal feedback control strategies are compared for two application examples, namely a cantilever beam and a four-bar mechanism. Robust control Controllers are designed separately for the slow and fast subsystems of a flexible link parallel robot in [23]. A similar approach is taken in [24] for a flexible satellite moving in an orbit. Thereby, variable structure control is used for the slow subsystem and a Lyapunov-based control is applied to suppress vibrations of the fast subsystem. A robust controller based on a μ-synthesis approach is designed in [25,26]. In [25], it is applied to a path-following problem for a bicycle model. In [26], it is applied to reduce stick-slip vibrations in a drill-string, while considering measurement delay. A sliding mode controller is applied in [27] to a cable robot with two moving platforms that is used for open-ocean transfer of cargo. Thereby, the sea conditions are considered as a disturbance on the system. From an application perspective, the above literature overview shows that the interest of the multibody community lies in the control of diverse application examples. Thereby, a focus on cable manipulators, parallel robots, space robots, and all types of vehicles can be identified. This trend is reflected by recent publications on vibration control of vehicles [28], nonlinear control of a parallel machine [29], and a cable manipulator [30]. From a control method perspective, the overview shows that many of the most cited contributions in the journal Multibody System Dynamics focus on feedforward control. In the framework of a two-design degree of freedom control structure shown in Fig. 2, it seems desirable to use a feedforward controller as accurate as possible in order to reduce the effort in the feedback loop. 
If most of the dynamics of the real system is already compensated by the feedforward controller, simple control strategies can be sufficient for accurate tracking. Accurate feedforward control is especially relevant for multibody systems undergoing large rigid body motion. Thereby, the concept of servo-constraints poses a method to compute inverse models efficiently for large nonlinear multibody systems and it is directly developed in the multibody dynamics community. Overview of servo-constraints The framework of servo-constraints is presented in [4,31] in the context of multibody system dynamics. The method is directly applicable to underactuated multibody systems, which include flexible multibody systems. In the proposed approach, the equations of motion of an (underactuated) multibody system are appended by so-called servo-constraints (also called program or control constraints) in order to enforce the output to stay on a predefined desired trajectory. The resulting DAEs that describe the inverse model can have a higher differentiation index [32]. For example, this is the case for overhead cranes [3] or flexible drive trains [33]. The DAEs can be solved numerically for the feedforward control input, which moves the system on the desired trajectory. Due to the higher differentiation index, various index reduction strategies are applied in context of servo-constraints, such as Baumgarte stabilization [34,35], minimal extension [36,37], and projection [4]. Moreover, a reformulation as an optimization problem is proposed in [38]. The solution strategy for the inverse model DAEs depends on the underlying system class. In this context, multibody systems as general controlled nonlinear systems can be divided into three classes: differentially flat systems, minimum phase systems, and nonminimum phase systems. The inverse model of differentially flat systems is given completely algebraically [39], while the inverse model of minimum phase systems has stable dynamics behavior and the inverse model of non-minimum phase systems has unstable system behavior [40,41]. Most literature on servo-constraints focuses on differentially flat systems, such as cranes [36,42,43], three-dimensional rotary cranes [37,44], and mass-spring chains [33,38,45,46]. Some literature also considers minimum phase systems, such as robotic manipulators with a few links [47,48]. For differentially flat as well as minimum phase systems, the inverse model DAEs resulting from the servo-constraints approach can be integrated forward in time. In the literature, the servo-constraints DAEs are usually solved by the implicit Euler scheme [4,47], but also backwards differentiation formulas are applied [45,49]. Few papers in the literature deal with the application of servo-constraints to nonminimum phase systems, such as robotic manipulators with passive joints or flexible systems [50,51]. Generally, a bounded solution to the inverse model problem of non-minimum phase systems can be computed in terms of stable inversion [52]. This approach is extended to the servo-constraints framework in [50,53]. Experimental results of the concept are shown in [54,55] for a flexible manipulator. The servo-constraints approach is presented in detail in the remainder of the paper and application examples are given for each system type. Modeling and system analysis A mathematical model forms the basis for model-based control. There exist several formulations to obtain the equations of motion of general multibody systems [56]. 
Here, a very general description is chosen that might be stated in redundant or generalized coordinates or a mixture of both. The considered systems have f degrees of freedom and possibly n_c geometric constraints, e.g. describing joints or kinematic loops. This system class includes many of the above mentioned application examples, e.g., flexible multibody systems and cable robots. Fully actuated and underactuated systems are included in the description and the equations of motion arise in the very general form

ẏ = Z(y) v,   (1)
M(y) v̇ = k(y, v) + q(y, v) + B(y) u + C(y)^T λ,   (2)
c(y) = 0.   (3)

The system is described by either redundant or generalized coordinates y ∈ R^n and Z ∈ R^{n×n} describes the kinematic relationship between positions y and velocities v ∈ R^n, M ∈ R^{n×n} denotes the mass matrix, k ∈ R^n denotes the Coriolis and centrifugal forces, q ∈ R^n describes the applied forces acting on the system and B ∈ R^{n×m} distributes the control input u ∈ R^m [56]. Equation (3) describes implicit constraints c ∈ R^{n_c}, which are enforced by the Lagrange multipliers λ ∈ R^{n_c}. The Lagrange multipliers are distributed by the constraint Jacobian C ∈ R^{n_c×n}. The system has f = n − n_c degrees of freedom. When the kinematics are described by minimal coordinates, it is y ∈ R^f and n_c = 0. Thus, the constraint equation (3) and reaction forces λ do not occur and the equations of motion reduce to ordinary differential equations (ODEs). It is assumed that the number of system outputs equals the number of system inputs. The system output z ∈ R^m is defined as

z = h(y).   (4)

For later reference, the first and second derivatives of the output are

ż = H(y) Z(y) v,   with H(y) = ∂h/∂y,   (5)
z̈ = H(y) Z(y) v̇ + d/dt[H(y) Z(y)] v.   (6)

Substituting the dynamics (2) into equation (6) shows the input-output relationship

z̈ = α(y) u + β(y, v, λ),   (7)

where β collects all terms that do not depend on the input, with the coupling matrix

α(y) = H(y) Z(y) M(y)^{-1} B(y).   (8)

In order to characterize the above model for model inversion and control design, the relative degree and the internal dynamics are of interest. For a single-input single-output (SISO) system, the relative degree r is obtained by differentiating the system output z until the input u appears. Roughly speaking, the number of taken derivatives is the relative degree r of the system [57]. In the SISO case, the coupling term in equation (8) is scalar. If α ≠ 0, equation (7) can be solved for the system input u. Thus, the system has relative degree r = 2. Otherwise, the relative degree is higher and the differentiation of the output must continue until the input appears. A relative degree of r = 2 is common for fully actuated holonomic multibody systems and also occurs for some underactuated multibody systems. The concept of relative degree can be generalized to the multi-input multi-output (MIMO) case in a straightforward manner [40]. Then, the system with m inputs and outputs is characterized by the vector relative degree r = {r_1, r_2, . . . , r_m} for each input-output channel if the decoupling matrix between input and output channels is regular. For example, a fully actuated robot usually has r = {2, 2, . . . , 2}.
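To make the coupling matrix and the rank condition concrete, the following minimal sketch evaluates α(y) = H(y) Z(y) M(y)^{-1} B(y) for an invented 3-DOF system with 2 inputs and checks whether α is regular, i.e. whether each output channel has relative degree 2. All matrices here are placeholders for illustration, not model data from the paper, and the model is assumed to be in minimal coordinates (n_c = 0).

```python
import numpy as np

# Placeholder model data for an illustrative 3-DOF system with 2 inputs.
M = np.diag([2.0, 1.0, 0.5])           # mass matrix M(y)
Z = np.eye(3)                           # kinematics ydot = Z(y) v (identity for minimal coordinates)
B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])              # input distribution, third coordinate unactuated
H = np.array([[1.0, 0.0, 0.5],
              [0.0, 1.0, 0.5]])         # output Jacobian H(y) = dh/dy

# Coupling matrix alpha(y) = H Z M^{-1} B, cf. equation (8).
alpha = H @ Z @ np.linalg.solve(M, B)

m = B.shape[1]
if np.linalg.matrix_rank(alpha) == m:
    # alpha regular: the input appears after two differentiations of the output
    print("alpha is regular: vector relative degree {2, ..., 2}")
else:
    # alpha singular: the output must be differentiated further, the relative
    # degree (and the differentiation index of the inverse model) is higher
    print("alpha is singular: higher relative degree / higher index")
print("alpha =\n", alpha)
```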
A derivation of the internal dynamics is here briefly outlined for SISO systems written in ODE form, with n_c = 0 [40]. Deriving the internal dynamics directly for DAEs is, for example, discussed in [58,59]. The internal dynamics can be extracted by applying a nonlinear coordinate transformation to the input-output relationship. Thereby, the new coordinates x̃ ∈ R^{2n} are chosen as the output z, its first r − 1 derivatives and the 2n − r remaining coordinates η. The new coordinates are collected in the vector

x̃ = [ z  ż  …  z^{(r−1)}  η^T ]^T.   (9)

The coordinates η ∈ R^{2n−r} describe the internal dynamics and are chosen such that the coordinate transformation is at least locally diffeomorphic. As a rule of thumb, the unactuated coordinates are a good choice for the coordinates η if the system output function h(y) contains all actuated coordinates [2,60]. The input-output normal form in new coordinates x̃ is

d/dt [x̃_1, …, x̃_{r−1}]^T = [x̃_2, …, x̃_r]^T,
d/dt x̃_r = z^{(r)} = β̄(x̃) + ᾱ(x̃) u,   (10)
η̇ = ρ(x̃) + σ(x̃) u.

Thereby, the terms ᾱ(x̃) and β̄(x̃) are expressed in the new coordinates x̃. The functions ρ(x̃) and σ(x̃) are given by the coordinate transformation for a specific choice η. The inverse model for trajectory tracking of the desired trajectory z_d(t) can be extracted from this input-output normal form. For this purpose, the desired trajectory z_d(t) is substituted into equation (10), and the rth part is solved for the desired control input

u_d = [ z_d^{(r)} − β̄(x̃_d) ] / ᾱ(x̃_d),   with x̃_d = [ z_d  ż_d  …  z_d^{(r−1)}  η^T ]^T.   (11)

The term ᾱ is nonzero by definition of the relative degree. Substituting the desired system input into the last 2n − r equations of equation (10) yields the internal dynamics

η̇ = ρ(x̃_d) + σ(x̃_d) u_d,   (12)

which are driven by the desired trajectory z_d(t). Equations (11) and (12) form the inverse model. Stability analysis of the driven internal dynamics (12) is difficult and therefore usually performed in terms of the zero dynamics [40]. Zero dynamics is derived by zeroing the output z(t) = 0 and its derivatives for all t, such that

z(t) ≡ 0,   ż(t) ≡ 0,   …,   z^{(r−1)}(t) ≡ 0,   (13)

which yields the zero dynamics

η̇ = ρ(0, …, 0, η) + σ(0, …, 0, η) ū,   with ū = −β̄/ᾱ evaluated at z ≡ 0.   (14)

Eigenvalue analysis of the linearized zero dynamics of the form

Δη̇ = A Δη,   with A = ∂(ρ + σ ū)/∂η evaluated at η_eq,   (15)

determines the stability around the equilibrium point η_eq. Thereby, Δη denotes small variations around the equilibrium point η_eq of the zero dynamics. Systems with stable internal dynamics are called minimum phase, while systems with unstable internal dynamics are called non-minimum phase systems. In case the relative degree is r = 2n, the system does not have internal dynamics and is differentially flat. The described approach can be generalized to MIMO systems in a straightforward manner [2,40]. General framework of servo-constraints In the classical analytical approach, the input-output normal form provides an inverse model in terms of equations (11) and (12). However, this approach might not be applicable in a straightforward manner for complex multibody systems. For example, it might not be possible to analytically find a state transformation. Therefore, the method of servo-constraints is developed, which describes the inverse model problem in DAE form. Thereby, the so-called servo-constraints, as an extension of classical geometric constraints in multibody system dynamics, are introduced which constrain the system output to the desired trajectory z_d(t). The servo-constraints s ∈ R^m append the system dynamics (1)-(3) to obtain the DAEs

ẏ = Z(y) v,   (16)
M(y) v̇ = k(y, v) + q(y, v) + B(y) u + C(y)^T λ,   (17)
c(y) = 0,   (18)
s(y, t) = h(y) − z_d(t) = 0,   (19)

which form the inverse model. Solving equations (16)-(19) yields the trajectory of all generalized coordinates y, v and the feedforward control input u_ffw. In case the internal dynamics (12) is unstable (non-minimum phase system), the inverse model problem (16)-(19) cannot be solved by forward integration and stable inversion must be applied. Equations (16)-(19) have a similar structure compared to the forward dynamics (1)-(3). The Lagrange multipliers λ enforce the geometric constraints c, while the system input u enforces the servo-constraint s. While the matrix C is the Jacobian of the geometric constraints, the input distribution B is in general not the Jacobian of the servo-constraints. Therefore, the input does not necessarily act perpendicular to the constraint manifold. Different realizations of the servo-constraints DAEs are analyzed in [4]; before turning to these realizations, a minimal sketch of assembling the inverse-model equations is given below.
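The following sketch makes the structure of (16)-(19) concrete by assembling the inverse-model residual for a system in minimal coordinates (n_c = 0, so (18) and λ drop out). The model functions Z, M, k, q, B, h and the desired trajectory z_des are placeholders to be supplied by the user; this is a sketch of the bookkeeping under these assumptions, not an implementation taken from the paper.

```python
import numpy as np

def servo_constraint_residual(y_dot, v_dot, y, v, u, t, model):
    """Residual of the inverse-model DAE (16)-(19) for n_c = 0.

    model is assumed to provide callables:
      Z(y), M(y), k(y, v), q(y, v), B(y), h(y)   -- system matrices and output
      z_des(t)                                   -- desired output trajectory
    """
    res_kin = y_dot - model.Z(y) @ v                                    # (16)
    res_dyn = model.M(y) @ v_dot - (model.k(y, v) + model.q(y, v)
                                    + model.B(y) @ u)                   # (17)
    res_servo = model.h(y) - model.z_des(t)                             # (19)
    return np.concatenate([res_kin, res_dyn, res_servo])
```

For n_c > 0, the geometric constraint residual c(y) and the reaction term C(y)^T λ in (17) would be appended analogously, with λ added to the unknowns.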
These realizations depend on the properties of the coupling term α defined in equation (8). The inverse model is in ideal realization if the input B is orthogonal to the tangent of the constraint manifold, see Fig. 3(a). When the input includes directions in the orthogonal as well as the tangential direction, the system is in non-ideal configuration, see Fig. 3(b). Both configurations are characterized by the full-rank condition rank(α) = m. If the matrix α has reduced rank, 0 < rank(α) < m, the system is in mixed orthogonal-tangential realization. For rank(α) = 0, the inverse model is in tangential configuration, see Fig. 3(c), and no input influences the system output directly. Due to these properties, the differentiation index of the DAEs (16)-(19) is larger than three for a singular matrix α. Note that the computation of the differentiation index involves the same steps as the computation of the relative degree defined above. In most cases, the differentiation index [4,53] is larger by 1 than the relative degree [32]. A close relationship between both concepts is important in the context of servo-constraints. This can be seen by assuming a so-called collocated output. In this case, the input and output occur at the same point and it is B = H^T. Assuming perfect tracking of the output, the collocated input-output pair can be seen as a rheonomic constraint on the dynamics. Thus, the control input corresponds to the Lagrange multipliers enforcing the rheonomic constraint and the problem has differentiation index 3. However, this is in general not true for other outputs. Differentially flat and minimum phase systems For differentially flat systems and minimum phase systems, the inverse model DAEs (16)-(19) can be integrated forward in time to obtain the feedforward control input u_ffw. Ideally, the solution is computed in real time in order to implement the strategy on the real system. Then, it can be adapted to varying trajectories or varying model properties. Numerical methods The arising DAEs (16)-(19) can have a high differentiation index. Therefore, suitable index reduction strategies should be applied in order to improve the numerical conditioning of the problem. Regarding the numerical integration, suitable DAE solvers must be chosen. Thereby, the need for a fast and an exact solution must be balanced. The necessary tracking accuracy depends on the specific application and other arising disturbances, such as friction in the actuators. Since feedback control is always added in terms of the control strategy in Fig. 2, the feedback control can compensate external disturbances as well as model uncertainties and numerical errors in the model inversion. The servo-constraints DAEs are typically solved by the implicit Euler scheme, since it yields a simple and stable implementation. For differentially flat systems, this is usually sufficient because the inverse model is an algebraic system, and the numerical damping of the implicit Euler scheme is therefore not as relevant. However, for minimum phase systems the inverse model is a dynamic system itself. In this case, the implicit Euler scheme might damp out the dynamics and the solution must be closely monitored. Therefore, higher-order schemes such as higher-order Runge-Kutta methods or backwards differentiation formulas (BDF) are proposed in this case. Application examples Two application examples are given in order to demonstrate the solving process and the real-time applicability. For a cable robot, experimental results using higher-order integration schemes are given.
These demonstrate the real-time capabilities of the approach. For a three-dimensional manipulator with a passive joint, simulation results demonstrate the application of servo-constraints on a complex three-dimensional system. It is shown that the implicit Euler scheme damps out the internal dynamics, while higher-order schemes result in accurate tracking and thus provide a superior feedforward control. Cable robot The cable-robot model represents an experimental setup at the Institute of Mechanics and Ocean Engineering at Hamburg University of Technology, see Fig. 4(a). The experimental setup consists of a trolley that can move in a range of 13 m and a load platform connected by four cables with a motion range of 9 m. In the following, the cables are operated synchronously. The multibody model has therefore f = 3 degrees of freedom and m = 2 inputs, which represent the trolley actuator and the winch actuator. The system output z is the position of the load platform. The system has a vector relative degree r = {4, 2}. Therefore, the cable robot is differentially flat and no internal dynamics remain. Refer to [3,53] for details on the experimental setup and the experimental results. The inverse model DAEs (16)- (19) are solved for the desired system input in real-time. For this purpose, higher-order integration schemes are implemented in the real-time environment. The 4-step BDF scheme and the Runge-Kutta Scheme Radau IIA of order 5 are compared. For the shown experimental results, the desired trajectory is chosen as a smooth transition from the initial trolley position x T = 15 m and trolley length L = 4 m to the final position x T = 11 m, L = 7 m. The desired trajectory and measurement data is shown in Fig. 5(a). It can be seen that without any external disturbances, the tracking is nearly perfect. Thereby, both integration schemes result in similar accuracy, which is sufficient for tracking. The control loop runs at a frequency of 100 Hz and the solution is computed in each time step. The computation times are measured on the experimental setup and are shown in Fig. 5(b) for the BDF scheme and in Fig. 5(c) for the Runge-Kutta scheme. It can be seen that both methods lie well below the available time of 10 4 µs, while the BDF scheme is approximately 5 times faster. Accurate feedforward control is normally supplemented by additional feedback control. In the following, the feedforward control is combined with a linear quadratic regulator (LQR) in accordance to the control structure shown in Fig. 2. For demonstration purposes, an initial position error of x T = 0.5 m is introduced. The experimental results are shown in Fig. 6. The feedback controller detects the initial position error and quickly tries to minimize the error by actuating the trolley. This results in sway motion of the load platform that is reduced during the transition time and is then minimized at the final position of the trajectory. A simple feedback strategy is sufficient for this control problem, since the feedforward controller already moves the system close to the desired trajectory, such that the system can be regarded as linear in the vicinity of the desired trajectory. Three-dimensional manipulator with one passive joint While the cable robot is differentially flat, there exist many common multibody systems with internal dynamics, e.g., flexible systems. In this case, special care must be taken during solver selection in order to reflect the internal dynamics accurately. 
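The role of the time integrator can be illustrated with a minimal time-stepping sketch: the inverse-model residual (for example the one sketched after equations (16)-(19), with the model argument already bound, e.g. via functools.partial) is discretized with the implicit Euler scheme, and the resulting nonlinear system is solved in each time step. The function names, the state partitioning and the use of a generic nonlinear solver are assumptions for illustration, not the experimental implementation; higher-order BDF schemes would replace the first-order difference quotients used here.

```python
import numpy as np
from scipy.optimize import fsolve

def implicit_euler_inverse_model(residual, y0, v0, u0, t_grid):
    """Step a servo-constraint DAE with the implicit Euler scheme.

    residual(y_dot, v_dot, y, v, u, t) must return the DAE residual
    of size 2*n + m for n coordinates and m inputs.
    """
    n = len(y0)
    y, v, u = y0.copy(), v0.copy(), u0.copy()
    u_ffw = [u0.copy()]
    for k in range(len(t_grid) - 1):
        h = t_grid[k + 1] - t_grid[k]
        t_next = t_grid[k + 1]

        def step_residual(w, y_old=y, v_old=v):
            y_new, v_new, u_new = w[:n], w[n:2 * n], w[2 * n:]
            y_dot = (y_new - y_old) / h          # first-order difference quotient
            v_dot = (v_new - v_old) / h
            return residual(y_dot, v_dot, y_new, v_new, u_new, t_next)

        w = fsolve(step_residual, np.concatenate([y, v, u]))  # previous step as guess
        y, v, u = w[:n], w[n:2 * n], w[2 * n:]
        u_ffw.append(u.copy())
    return np.array(u_ffw)
```

For minimum phase systems with oscillatory internal dynamics, the same loop would be built on difference quotients of a BDF scheme of order 2 or higher to avoid the numerical damping discussed above.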
In the following, numerical results are shown for the three-dimensional manipulator with one passive joint shown in Fig. 7. The passive joint is a simple approximation of the first oscillation mode of a flexible link. The considered system consists of four links. The first link is in the vertical orientation; it is actuated by input u_1 and rotates around the z-axis. The second and third link are actuated by inputs u_2 and u_3, respectively, and rotate around the y'-axis, denoted by the angles α and β, respectively. The fourth link is connected by a spring-damper combination and also rotates around the y'-axis, denoted by the angle γ. The system has f = 4 degrees of freedom and m = 3 inputs. The system output is defined as the end-effector position in the inertial coordinate frame K. The vector relative degree of the system is r = {2, 2, 2} and internal dynamics of dimension 2 remain. The internal dynamics can be derived analytically as shown for a planar manipulator with one passive joint in [7]. The internal dynamics is unstable for a homogeneous mass distribution of the links. The mass distribution is optimized in [7,62] to obtain stable internal dynamics. This is realized by adding counterweights to the links. The optimized simulation parameters for minimum phase behavior are listed in Table 1. The following model inversion results demonstrate the real-time capabilities of the servo-constraints approach in the case of internal dynamics and three-dimensional system behavior. The desired trajectory is a circular path shown in Fig. 8.

Fig. 8: Visualization of the three-dimensional manipulator in the initial position y_0 and the desired trajectory z_d(t) [53]

The initial state vector y_0 is also visualized in Fig. 8, and the simulation results are shown in Fig. 9. The dashed line denotes the final time of the trajectory. Regarding the system input u, oscillations are visible after the end of the trajectory, see Fig. 9(a). These oscillations compensate the motion of the internal dynamics γ, see Fig. 9(d). The corresponding states are shown in Fig. 9(b) and the system output is shown in Fig. 9(c). The desired output trajectory is tracked perfectly and the system is at rest at the end of the trajectory. This shows that the oscillations of the internal dynamics are not observable from the output. However, an accurately computed system input is necessary to compensate the motion of the internal dynamics. Often, an implicit Euler scheme is chosen to solve the inverse model DAEs. Solving this application example with internal dynamics with the implicit Euler scheme damps out the oscillations of the internal dynamics, see Fig. 9(d). Therefore, it is not sufficient for accurate tracking, and higher-order integration schemes should be applied in the case of internal dynamics. The real-time capabilities of the approach are now demonstrated by solving the inverse model problem with different BDF schemes and measuring the total computation time t_calc. The bottleneck of the computation lies in solving the nonlinear equations from the BDF scheme using Newton's method. In order to analyze the speed, different Jacobian approximations are compared. These include the precomputed analytic Jacobian J_ana, the Jacobian J_broy approximated by Broyden's method [63], and the numerical Jacobian J_num approximated by finite differences. For analyzing the accuracy after computing the feedforward control input u_ffw, it is applied to the system in a forward simulation, using the MATLAB solver ode15s.
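This accuracy check can be sketched as follows: the computed feedforward input is interpolated in time, applied to the forward dynamics, and the simulated output is compared with the desired trajectory. The functions forward_rhs (forward dynamics in state-space form) and output are placeholders; the paper uses ode15s in MATLAB, whereas this sketch uses a comparable stiff solver from SciPy.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.interpolate import interp1d

def validate_feedforward(forward_rhs, output, x0, t_grid, u_ffw, z_des):
    """Apply u_ffw(t) in a forward simulation and return the maximum tracking error."""
    u_of_t = interp1d(t_grid, u_ffw, axis=0, fill_value="extrapolate")
    sol = solve_ivp(lambda t, x: forward_rhs(t, x, u_of_t(t)),
                    (t_grid[0], t_grid[-1]), x0,
                    t_eval=t_grid, method="BDF", rtol=1e-8, atol=1e-10)
    z_sim = np.array([output(x) for x in sol.y.T])
    z_ref = np.array([z_des(t) for t in t_grid])
    return np.linalg.norm(z_sim - z_ref, axis=1).max()
```

The returned value corresponds to the error measure e_max used in the convergence study below.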
The maximum tracking error between the simulated output z_sim and the desired inverse model output z_d is e_max = max_t ||z_sim(t) − z_d(t)||. The error e_max converges based on the given convergence order of the respective BDF scheme, see Fig. 10. The implicit Euler scheme, i.e., k_bdf = 1, converges with first order and yields comparably large tracking errors due to the numerical damping shown above. The other schemes yield accurate tracking, while numerical rounding errors start to influence the solutions for step sizes Δt < 1 ms for k_bdf = {3, 4, 5}. In this case, the convergence is stalled and further reducing the step size will yield larger errors. For each BDF scheme, all Jacobians yield similar tracking errors. However, Broyden's method does not converge for step sizes Δt ≥ 5 ms and no results are given. The computation time is shown in Fig. 10(b), where the bold horizontal line denotes the simulation time and therefore the real-time barrier. Afterwards, the real-time applicability has to be ensured by checking each time step and implementing the scheme on an experimental setup. Nevertheless, this comparison gives a good estimate of the real-time capabilities. The results show that for a given Jacobian approximation, all BDF schemes result in similar computation times. The analytical Jacobian J_ana yields the smallest computation times and is real-time capable for Δt ≥ 0.75 ms. Broyden's method is slightly slower and is real-time capable for Δt ≥ 2.5 ms. The numerical Jacobian J_num is approximately 8 times slower compared to the analytical one and is real-time capable for Δt ≥ 10 ms. These results demonstrate that Broyden's method can be a convenient alternative if the analytical Jacobian is not available. Moreover, the results show that the inverse model DAEs based on servo-constraints can be real-time capable for a three-dimensional system with internal dynamics. Non-minimum phase systems For non-minimum phase systems, the inverse model DAEs (16)-(19) cannot be integrated forward in time. This is not only of theoretical relevance, but many typical multibody systems are non-minimum phase systems. For example, flexible manipulators are usually non-minimum phase if the end-effector is considered as an output. In this case, the stable inversion problem must be considered and is reviewed in the following. Stable inversion formulated as boundary value problem Stable inversion is proposed originally for ODEs describing the internal dynamics explicitly, i.e., in the form of equations (11)-(12) [52]. It is applicable in case the internal dynamics has a hyperbolic equilibrium point. In order to compute bounded desired system inputs, a boundary value problem is formulated for the explicitly given internal dynamics. The coordinates η of the internal dynamics are defined to start on the unstable manifold of the equilibrium point η_eq,0 of the internal dynamics at the beginning of the trajectory z_d(t) and to end on the stable manifold of the equilibrium point η_eq,f at the end of the trajectory. The stable and unstable manifolds are locally approximated by their stable and unstable eigenspaces, respectively. The boundary conditions are then

B_s (η(T_0) − η_eq,0) = 0,   (20)
B_u (η(T_f) − η_eq,f) = 0.   (21)

Thereby, T_0 and T_f denote the initial and final simulation time and the matrices B_s ∈ R^{n_s×(2n−r)} and B_u ∈ R^{n_u×(2n−r)} contain the eigenvectors of the stable and unstable eigenspaces, respectively. It is shown in [50] that the stable inversion procedure can also be applied directly to the inverse model described by the DAEs (16)-(19); respective boundary conditions are then posed in terms of the redundant coordinate vector.
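As a minimal illustration of the boundary conditions (20)-(21), the following sketch applies stable inversion to an assumed two-dimensional internal dynamics with one stable and one unstable direction, driven by a smooth bump z_d(t). The dynamics, trajectory and time window are invented for illustration; in this decoupled example the eigenspace conditions reduce to fixing the stable component at T_0 and the unstable component at T_f.

```python
import numpy as np
from scipy.integrate import solve_bvp

def z_d(t):
    # smooth bump centred at t = 0; z_d is practically zero near T0 and Tf,
    # so both equilibria of the internal dynamics are at the origin
    return np.exp(-4.0 * t**2)

def rhs(t, eta):
    # assumed internal dynamics: eta1 stable (-2), eta2 unstable (+3)
    return np.vstack([-2.0 * eta[0] + z_d(t),
                       3.0 * eta[1] + z_d(t)])

def bc(eta_T0, eta_Tf):
    # (20): stable component vanishes at T0  -> start on the unstable eigenspace
    # (21): unstable component vanishes at Tf -> end on the stable eigenspace
    return np.array([eta_T0[0], eta_Tf[1]])

T0, Tf = -3.0, 3.0                       # window enlarged for pre/postactuation
t = np.linspace(T0, Tf, 200)
sol = solve_bvp(rhs, bc, t, np.zeros((2, t.size)))
print("bounded internal dynamics found:", sol.success)
```

The solution exhibits the noncausal pre- and postactuation behaviour discussed next: the unstable component is already nonzero before the bump starts, while the stable component decays only after it ends.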
Solving the boundary value problem yields a bounded, but noncausal system input u_ffw. It consists of a pre- and postactuation phase in order to induce motion in the internal dynamics before the start of the trajectory at t_0 and to bring the internal dynamics to rest after the end of the desired trajectory at time t_f. In order to capture this pre- and postactuation phase, the boundary value problem is solved on the time interval [T_0, T_f] with T_0 ≤ t_0 and T_f ≥ t_f, respectively. Deriving the boundary conditions (20)-(21) poses some difficulties in realistic and more complex problems, since the derivation of the internal dynamics (12) is not straightforward. Stable inversion formulated as an optimization problem In order to avoid the definition of the boundary conditions (20)-(21), a reformulation of the stable inversion problem as an optimization problem is proposed in [51,64]. The optimization problem is defined as

min_{η(·), u(·)}  J(η, u)   (22)

subject to the internal dynamics (12). In the aforementioned references, the cost functional J is chosen such that a bounded solution is ensured. It is shown in [51] that the solution of the optimization problem (22) converges to the solution of the boundary value problem as the interval [T_0, T_f] goes to infinity. The optimization problem can be formulated equivalently for the servo-constraints DAEs (16)-(19), see [64]. The optimal control problem is an infinite-dimensional problem due to the integral form of the cost functional. Direct methods or indirect methods are available for solving the optimization problem [65]. In the context of stable inversion, the optimal control problem is so far solved by direct methods, such as direct transcription [64,66] or multiple shooting [51]. Discretizing the constraints and the cost function results in a finite-dimensional optimization problem that can be solved by optimization algorithms.

Fig. 11: Overview of methods to solve the stable inversion problem [53]

Figure 11 shows an overview of the various approaches to the stable inversion problem. There are six different formulations of the model inversion. On the one hand, there are the inverse models based on the explicitly derived internal dynamics (12), solved as a boundary value problem (bvp-ode), a directly solved optimization problem (opt-ode) or an indirectly solved optimization problem (idopt-ode). On the other hand, there are the inverse models based on the servo-constraints DAEs (16)-(19), solved as a boundary value problem (bvp-dae), a directly solved optimization problem (opt-dae) or an indirectly solved optimization problem (idopt-dae). Comparison of stable inversion methods For a numerical comparison, all six methods are applied to the two-link robot with one passive joint shown in Fig. 12. The angles α and β describe the rotation of the first and second link, respectively. The system input is a torque applied on the first link, while the system output z is the angle between the end-effector and the horizontal line. The system has relative degree r = 2 and is non-minimum phase for a homogeneous mass distribution of the two links. The coordinate β describes the internal dynamics. The internal dynamics are derived analytically in [47]. Here, the numerical accuracy and computation times are compared for the stable inversion methods. Thereby, the boundary value problems are solved using finite differences with Simpson discretization.
The optimization problems are solved using the MATLAB solver fmincon. For convergence analysis, the reference solution is computed with the MATLAB solver bvp4c for the case bvp-ode using a very small step size. The simulation parameters are chosen as L_1 = L_2 = 0.5 m, m_1 = m_2 = 0.05 kg, d = 2.5 × 10^{-5} N m s/rad and k = 0.5 N m/rad. The desired trajectory is a smooth transition from z = 0° to z = 30°, see Fig. 13(a). The phase diagram of the coordinate β in Fig. 13(b) shows the effect of the boundary conditions. The trajectory starts on the eigenspace E^U_f approximating the unstable manifold and reaches the equilibrium in the direction of the eigenspace E^S_f, which approximates the stable manifold. The convergence of the maximum error e_max = max_t |u_ref(t) − u_d(t)| is shown in the upper diagram of Fig. 14. All solution approaches show a similar convergence behavior of order 4. For this application example, the solution accuracy cannot be improved for time steps below Δt = 0.03 s. On the other hand, there are differences in the computation times. Both direct optimization schemes opt-ode and opt-dae have larger computation times compared to the other formulations. Since the remaining four approaches have comparable computation times, it is proposed to use the simplest approach. From a practical point of view, the method bvp-dae represents the simplest approach since it does not depend on deriving the internal dynamics explicitly and no adjoint variables must be solved for. Conclusion The contribution of this paper is two-fold. The first part in Sect. 2 gives an overview of the control-related literature in the journal Multibody System Dynamics during the past 25 years. It is shown that the control-related contributions have increased over the past 25 years. The most cited contributions are summarized in order to show active fields of research and to highlight the relevant control problems for the multibody community. As can be seen from the literature survey, trajectory tracking control is one major research focus. This control problem is challenging, especially for multibody systems, since multibody systems often exhibit large nonlinear motion. This explains the active research in this area. One important aspect for the trajectory-tracking control of complex multibody systems is the design of an accurate feedforward control strategy. Therefore, the second part in Sects. 3-6 gives an overview of the method of servo-constraints, as it is a framework for feedforward control that is directly developed from the MBS community. It is an elegant way to compute an inverse model, which can be directly used as feedforward control. Servo-constraints enforce the system output to be identical to the desired trajectory. The arising DAEs have similarities to the DAEs describing the forward dynamics of general multibody systems. However, the differentiation index of the inverse model DAEs can be much higher than 3. Therefore, numerical solution strategies must be chosen with great care. Before applying servo-constraints, the system properties must be analyzed, since the solution strategy depends on the underlying system type. In order to give a comprehensive overview of the method, it is analyzed for all possible system types. For differentially flat and minimum phase systems, the servo-constraints DAEs can be integrated forward in time. The application example of an overhead crane is used to demonstrate real-time capabilities on an experimental setup.
The application example of a three-dimensional robotic manipulator is used to demonstrate numerical solution properties for a complex minimum phase system. For non-minimum phase systems, a boundary value problem must be solved. In the literature, different mathematical formulations exist. Here, they are compared in a numerical study for a two-link manipulator with one passive joint. While all formulations can be solved with comparable accuracy, there are differences in the efficiency. From a practical point of view, the approach describing the inverse model as servo-constraints DAEs that are solved by a BVP represents the simplest approach since it does not depend on deriving the internal dynamics explicitly and no adjoint variables must be solved for. With the unbroken trend for more efficiency, the development of new control strategies will continue to play an important role in future research. For feedforward control, the inverse model accuracy can be further enhanced, e.g., by adding data-driven approaches, by including multiphysics effects and by taking into account more flexible mode shapes.
Adherence as a Predictor of the Development of Class-Specific Resistance Mutations: The Swiss HIV Cohort Study Background Non-adherence is one of the strongest predictors of therapeutic failure in HIV-positive patients. Virologic failure with subsequent emergence of resistance reduces future treatment options and long-term clinical success. Methods Prospective observational cohort study including patients starting new class of antiretroviral therapy (ART) between 2003 and 2010. Participants were naïve to ART class and completed ≥1 adherence questionnaire prior to resistance testing. Outcomes were development of any IAS-USA, class-specific, or M184V mutations. Associations between adherence and resistance were estimated using logistic regression models stratified by ART class. Results Of 314 included individuals, 162 started NNRTI and 152 a PI/r regimen. Adherence was similar between groups with 85% reporting adherence ≥95%. Number of new mutations increased with increasing non-adherence. In NNRTI group, multivariable models indicated a significant linear association in odds of developing IAS-USA (odds ratio (OR) 1.66, 95% confidence interval (CI): 1.04-2.67) or class-specific (OR 1.65, 95% CI: 1.00-2.70) mutations. Levels of drug resistance were considerably lower in PI/r group and adherence was only significantly associated with M184V mutations (OR 8.38, 95% CI: 1.26-55.70). Adherence was significantly associated with HIV RNA in PI/r but not NNRTI regimens. Conclusion Therapies containing PI/r appear more forgiving to incomplete adherence compared with NNRTI regimens, which allow higher levels of resistance, even with adherence above 95%. However, in failing PI/r regimens good adherence may prevent accumulation of further resistance mutations and therefore help to preserve future drug options. In contrast, adherence levels have little impact on NNRTI treatments once the first mutations have emerged. Introduction Combined antiretroviral therapy (ART) aims at continuous and lasting suppression of viral replication, which is one of the most important factors influencing long-term prognosis of HIVinfected individuals [1,2]. The importance of adherence to ART has increased as treatment of HIV at present requires life-long therapy once initiated. Non-adherence to therapy has been shown to be one of the strongest predictors of failure of ART [3,4]. Long-term viral suppression requires very high if not perfect adherence, however recent studies have shown that the majority of patients on potent current regimens are able to maintain viral suppression at adherence rates lower than 95% [5][6][7][8]. Virologic failure is associated with increased risk of emergence of drug resistance [9] and therefore reduces future treatment options and long-term clinical success [10,11]. Studies of the relationship between adherence and resistance in HIV were only conducted a few years ago and indicate that the relationship is more complicated than originally thought, with each drug class having a unique adherence-resistance relationship [12][13][14][15][16][17]. The adherence-resistance relationship in historic monotherapy regimens containing a single unboosted protease inhibitor (PI) or a non-nucleoside reverse transcriptase inhibitor (NNRTI) is thought to be similar. Studies showed that most drug resistance mutations were occurring in individuals with adherence above 90% [10,[18][19][20]. 
A subsequent mathematical model of PI regimens determined that the maximal resistance occurs at 87% adherence and declines only modestly with perfect adherence [21]. This degree of adherence is low enough to allow for viral failure while high enough to exert selective pressure for resistant virus to emerge. Ritonavir boosted PI regimens allow for more potent viral suppression than ritonavir unboosted PIs and this reduces the emergence of resistance mutations. Boosting increases the half-life of the PI and so PI concentrations remain in a suboptimal therapeutic range for a briefer time during periods of non-adherence [18]. Resistance to PIs usually requires multiple mutations and thus exhibit a high genetic barrier; therefore high level resistance requires both ongoing viral replication and sufficient drug exposure to create a selective advantage for drug-resistant virus [22]. For NNRTIs, resistance is associated with interruptions in therapy [23] and develops at a lower level of adherence than PI resistance [24]. In addition it has been shown that minority variants harbouring drug resistance mutations can jeopardize therapeutic success of NNRTI but not boosted PI containing regimens [25][26][27]. Unlike most PI drugs, resistance to the NNRTIs nevirapine and efavirenz requires only a single mutation at the K103N codon and even a single dose of NNRTI monotherapy can result in resistance [28]. In addition, NNRTIs have long half-lives allowing the virus to replicate in the presence of low but detectable plasma levels in the case of consecutive missed doses. Resistance mutations are common in patients with any level of adherence that is insufficient for full viral suppression but almost absent in highly adherent patients. The clinical implications of NNRTI resistance are considerable since NNRTI resistance almost universally confers crossresistance to first generation NNRTIs and drug resistance mutations persist due to low fitness cost in most cases even after drug discontinuation [29]. The potency of the ART regimen, defined as the likelihood to suppress HIV-1 viremia below the limits of standard assay detection for prolonged periods of time, is the largest single determinant of the development of resistance for all ART classes. The fitness cost of resistance and the genetic barrier to resistance are important but they matter most during active viral replication [13]. Therefore, complete viral suppression and consequently optimized adherence are the undisputed goals of therapy. Adherence patterns of individuals can also have public health implications through the spread of drug-resistant strains of HIV to uninfected or drug-naïve individuals, limiting their future treatment options [30,31]. The goal of the study is to quantify the impact of adherence to ART on the development of class-specific resistance mutations in patients starting a new class of ART. Selection of patients and genotypic drug resistance tests Patients for this study were selected from the SHCS, which is a nationwide observational study of HIV infected individuals in medical care in Switzerland. In semi-annual visits, laboratory measurements are performed and clinical questionnaires are administered [32]. 
The genotypic drug resistance data stem from the SHCS drug resistance database, which is a central, anonymized collection of all genotypic drug resistance tests ever performed on SHCS enrolees by one of the four authorized laboratories in Switzerland [16], complemented by a systematic retrospective collection of pre-treatment genotypes [33]. The data are stored in SmartGene's Integrated Data Network Services tool (Version 3.6.1). The SHCS drug resistance database was screened for genotypic drug resistance tests (GRT) which were performed while patients were receiving ART and to which an adherence assessment could be linked. In particular, GRTs/patients were included if the following conditions were fulfilled ( Figure 1): Patients had to be either 1) therapy naïve individuals initiating either a ritonavir-boosted PI (PI/r) or a NNRTI regimen or 2) if having failed a first regimen with an on-treatment HIV RNA >500 copies/mL after 24 weeks of therapy, they must have initiated a second regimen with two fully active drugs (corresponding to a Stanford genotypic sensitivity score < 2) that contained a different third drug class (i.e., NNRTI or PI/r) than the initial regimen. Moreover, GRT included in the analysis had to be performed after a minimum ART exposure duration of 30 days, be linked to a completed adherence questionnaire within 180 days of the genotype, and an HIV RNA measurement must have been obtained on the same treatment regimen. These conservative selection criteria were designed to maximize sample size while only measuring newly emerging drug resistance mutations against antiretroviral therapies that were functional at the time of initiation. We further aimed to exclude salvage therapies, because the origin of mutations and consequently the impact of adherence can often no longer be assessed accurately in this population. Hence, GRTs performed on patients receiving integrase inhibitors or CCR5 entry inhibitors were also excluded from this analysis because in Switzerland these drugs were only used in trials of salvage treatments during the observation period. Adherence. On twice-yearly study visits, levels of selfreported adherence are assessed via interview with the clinician. Patients are asked about the frequency of missed drug doses over the past four weeks (never, less than once a month, monthly, twice a month, weekly, more than once a week, daily) and whether consecutive doses of ART were missed. Using the first question plus the dosing frequency of the regimen, the percentage adherence was calculated and then categorized into 100%, 95-99%, and <95%. We considered the inclusion of both adherence measures (percentage adherence and consecutive missed doses) together as well as separate in all models. The predictive value of the adherence questions and non-adherence definitions with regard to viral load levels have been demonstrated previously [3]. Statistical analysis The primary study outcome was the new emergence of any International AIDS Society -USA drug resistance mutations while receiving antiretroviral therapy [34]. The primary explanatory variable of interest was adherence to antiretroviral therapy over the past 6 months, stratified into the following categories: 100%, 95-99%, and<95% adherence. 
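The adherence percentage described above can be illustrated with a small sketch. The mapping from the questionnaire's frequency categories to an approximate number of missed doses over the 4-week recall period is an assumption for illustration (the questionnaire records categories, not counts), as is the function name; the categorization thresholds follow the text.

```python
# Hypothetical mapping from the reported frequency of missed doses to an
# approximate count of missed doses over the 4-week recall period.
MISSED_DOSES_4W = {
    "never": 0,
    "less than once a month": 0.5,
    "monthly": 1,
    "twice a month": 2,
    "weekly": 4,
    "more than once a week": 8,
    "daily": 28,
}

def adherence_category(missed_frequency, doses_per_day):
    """Return the adherence stratum (100%, 95-99%, <95%) for a 4-week period."""
    prescribed = doses_per_day * 28
    missed = MISSED_DOSES_4W[missed_frequency]
    adherence = 100.0 * (prescribed - missed) / prescribed
    if adherence >= 100.0:
        return "100%"
    if adherence >= 95.0:
        return "95-99%"
    return "<95%"

# Example: one missed dose per week on a once-daily regimen -> 24/28 = 85.7% -> "<95%"
print(adherence_category("weekly", doses_per_day=1))
```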
In order to test the independent association of adherence with emergence of drug resistant mutations, multivariable models were constructed adjusting for the following pre-specified potential confounders: age, gender, active or prior injecting drug use, AIDS-defining events, baseline CD4 cell count, baseline HIV-1 RNA, and the nucleoside reverse transcriptase inhibitor (NRTI) backbone combination (in particular the inclusion of 3TC, a low genetic barrier drug) and number of previous regimens. Univariable and multivariable logistic regression models were performed separately for NNRTI and PI/r. Unless a significant departure from linearity was detected by likelihood ratio tests, levels of adherence were primarily fitted as linear trends across the pre-defined strata. In addition to the analysis of any new IAS-USA mutations, further sub-analyses considered specific types of mutations, which were any major mutations to the group-specific third drugs (i.e. either PI/r or NNRTI) and the new emergence of M184V mutations. The latter mutation was selected due to its low genetic barrier and the widespread use of M184V selecting drugs in Switzerland. These sub-analyses were restricted to individuals with the appropriate drug exposures, which selected for the mutations of interest. As a secondary outcome, we compared levels of plasma HIV RNA around the time of the adherence assessment across the different strata of medication adherence. Because a number of included patients lacked information from prior genotypic drug resistance testing (Table 1) we performed sensitivity analyses by repeating our analyses on the subset of individuals who either received their first ART ever or those who started a new line of ART (including one new drug class) and who had a prior on-treatment genotypic resistance test done. All analyses were done using SAS v9.2 (SAS Corporation, Cary, NC, USA) and Stata v12.0 (StataCorp LP, College Station, TX, USA). All p-values are two-sided and the threshold for statistical significance was set at 0.05. Among the most notable differences between the two groups were the somewhat higher proportion of individuals who have acquired HIV via injecting drug use in the PI/r group and the lower proportion of individuals in the PI/r group who had previously experienced virological failure on antiretroviral therapy. Levels of adherence were similar between the PI/r and NNRTI groups, and approximately 85% of individuals reported adherence levels >95% in both groups. Due to the limited number of cases, all adherence strata <95% were collapsed into one group for further analyses. Genotypic sensitivity score As shown in table 2, levels of drug resistance were considerably lower in the group of PI/r recipients -for example, resistance to any IAS-USA mutations was detected in 56 (34.6%) of NNRTI recipients versus 14 (9.2%) of PI/r recipients (p<0.001). These findings were confirmed by sensitivity analyses (Table S1). Indeed, the Stanford cumulative genotypic sensitivity scores of the drug combination at the time of genotyping (i.e. the number of active drugs) were generally higher in the PI/r group (median [IQR], 3 ) than in the NNRTI group (2 [1-3], Wilcoxon rank sum p<0.001; data not shown). Results for the sensitivity analysis were identical (data not shown). Moreover, when comparing cumulative genotypic sensitivity scores upon failure across adherence strata, there was a trend for lower scores (i.e. 
more resistance) with lower adherence in the PI/r group (reduction of -0.09 [-0.20; 0.01] score points with decreasing adherence, p=0.074, data not shown). No such trend was observed for the NNRTI group. Potentially, those results in the PI/r group could have been influenced by previous virological failures, which may have affected genotypic sensitivity scores for NRTI drugs (but not PI/r or NNRTI scores). Nevertheless, those associations between GSS and adherence in the PI/r group were still observed when repeating the analysis on the set of individuals starting their first antiretroviral therapy (-0.17 [-0.30; -0.03], p=0.016; data not shown).
Associations of emergence of drug resistance with adherence
Descriptive analyses shown in Table 2 indicate that adherence may indeed influence the probability of detecting IAS-USA mutations by genotypic testing. In the group of NNRTI recipients there was a steady increase in the proportion of tests with resistance mutations for all three outcomes - the presence of any mutations, NNRTI mutations, or the M184V mutation. This trend persisted even in an analysis of adherence categorized into the five original strata (not shown). Increases in the proportions of resistant viruses with lower adherence levels were observable when analysing any IAS-USA mutations or M184V as outcomes, but not with the emergence of PI mutations. However, when performing sensitivity analyses with a more strictly defined population who either had started their first ART or had a prior on-treatment GRT available (Table S1), those associations of adherence levels with emergence of new NNRTI or M184V mutations were no longer apparent, possibly owing to the lower sample size. Kaplan-Meier curves for the unadjusted association between adherence and development of mutations were estimated (Figure 2). These observed adherence-resistance relations were further tested for statistical significance in univariable and multivariable logistic regression models. In univariable analyses, the only outcome showing a statistically significant association with adherence in the PI/r group was the emergence of the M184V mutation (n=137 patients on a 3TC or FTC backbone; odds ratio [95% confidence interval] 4.98 [1.41-17.62] per increase in adherence stratum; Table 3). This estimate should be interpreted with caution, however, because M184V mutations did not occur in the middle adherence stratum. In multivariable models, adherence was still significantly associated with development of a new M184V mutation (8.38). As shown in Table 4, the emergence of any IAS-USA mutation remains significantly associated with adherence (1.66 [1.04-2.67]). In addition, the association between adherence and NNRTI mutations becomes significant after adjustment for confounders (1.65 [1.00-2.70]). In multivariable models for any IAS-USA mutation and NNRTI mutation, currently being on a regimen with ABC, TDF or didanosine (ddI) resulted in a decreased risk for the development of mutations. The majority of those treatments contained TDF (143 of 204, 70%), which is more potent, can be given once daily, and is better tolerated than zidovudine (AZT) [35], and hence may have led to fewer problems with adherence and less drug resistance [36].
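A minimal sketch of the kind of univariable and multivariable logistic models described above is given below. It uses a synthetic pandas DataFrame with hypothetical column names and an assumed trend coding of the adherence strata; it is illustrative only, not the authors' SAS/Stata code.

```python
# Sketch of the resistance-vs-adherence logistic models (hypothetical column
# names and synthetic data; adherence entered as a linear trend across strata).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic data standing in for one row per genotypic test
rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "adherence_cat": rng.choice(["100%", "95-99%", "<95%"], size=n),
    "new_mutation": rng.integers(0, 2, size=n),
    "age": rng.normal(40, 10, size=n),
    "gender": rng.choice(["F", "M"], size=n),
    "injecting_drug_use": rng.integers(0, 2, size=n),
    "prior_aids": rng.integers(0, 2, size=n),
    "baseline_cd4": rng.normal(350, 150, size=n),
    "baseline_log_rna": rng.normal(4.5, 1.0, size=n),
    "lamivudine_backbone": rng.integers(0, 2, size=n),
    "n_previous_regimens": rng.integers(0, 3, size=n),
})

# Assumed trend coding: higher score = higher adherence stratum
adherence_order = {"<95%": 0, "95-99%": 1, "100%": 2}
df["adherence_score"] = df["adherence_cat"].map(adherence_order)

# Univariable model: any new IAS-USA mutation vs. adherence trend
uni = smf.logit("new_mutation ~ adherence_score", data=df).fit()

# Multivariable model adjusting for the pre-specified confounders
multi = smf.logit(
    "new_mutation ~ adherence_score + age + gender + injecting_drug_use"
    " + prior_aids + baseline_cd4 + baseline_log_rna"
    " + lamivudine_backbone + n_previous_regimens",
    data=df,
).fit()

# Odds ratios with 95% confidence intervals
or_table = np.exp(pd.concat([multi.params, multi.conf_int()], axis=1))
or_table.columns = ["OR", "2.5%", "97.5%"]
print(or_table.round(2))
```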
Associations of HIV RNA and adherence
We further investigated the impact of adherence on levels of HIV RNA measured at the time of genotypic testing in our sample. In the PI/r group, average levels of HIV RNA in log10 copies/mL (standard deviation) increased with lower adherence and reached 2.8 (1.5) log copies/mL in the 100% adherence group, 3.2 (1.5) log copies/mL in the 95-99% group and 3.7 (0.9) log copies/mL in the <95% group. Consequently, there was a statistically significant linear trend of 0.5 [95% confidence interval 0.1-0.8] log10 copies/mL per lower adherence stratum (R² = 0.053; F-test p=0.004). No such relationship was observed in the NNRTI group, where HIV RNA levels remained approximately constant across all adherence strata (in descending order) at 3.6 (1.3), 3.6 (0.9), and 4.0 (0.9) log10 copies/mL; these group means did not differ significantly (F-test p=0.33). The sensitivity analysis yielded almost identical results. For the PI/r group, a decrease in adherence was associated with a 0.46 [95% confidence interval 0.1-0.8] log10 copies/mL increase in HIV RNA, whereas no association was observed for the NNRTI group.
Durability of regimens
We compared the durability of the regimens and found the viral failure rate to be significantly higher in those on NNRTI compared with PI (78.0% vs. 47.2%, p<0.001). In those who experienced virological failure, the average durability of the regimen was 21 and 15 months in the NNRTI and PI/r groups, respectively (p=0.039). Within the NNRTI group, the average durability of those on regimens with EFV was slightly longer than those on NVP (22 versus 19 months), but the difference was not statistically significant.
Discussion
In this carefully selected data set of genotypic drug resistance tests and adherence measurements from individuals treated with potent combination antiretroviral therapies, we observed associations of NNRTI drug resistance emergence with levels of prior self-reported adherence, while associations for PI/r regimens varied according to the type of resistance mutation. Therapies containing a boosted PI seemed more forgiving of incomplete adherence than NNRTI-based therapies, which allowed much higher levels of resistance emergence, even at adherence levels of 95% and higher. In the NNRTI group, levels of resistance defined by the presence of at least one IAS-USA mutation increased markedly from the highest (29%) to the lowest adherence stratum (50%). Not surprisingly, many of those individuals from the NNRTI group with at least one new mutation also harboured NNRTI-resistant viruses (ranging from 83%-87%, Table 2), which are known to emerge rapidly and to confer full resistance to EFV and NVP. In contrast, there was little statistical evidence for increases in resistance levels at lower adherence strata when a boosted PI was used, although these analyses were somewhat limited in power due to the low frequency of resistance emergence on boosted PI regimens in general. Of further note, we found associations of HIV RNA levels with adherence strata in the PI/r group, but not in the NNRTI group. Most likely, this observation is tied to the fact that a single NNRTI mutation can render NNRTI drugs like efavirenz or nevirapine ineffective, whereas several mutations are usually required to have a strong impact on PI susceptibility. Therefore, the similarity in HIV RNA levels across adherence strata in the NNRTI group possibly reflects the higher degree of resistance in NNRTI regimens and consequently the lower residual efficacy of those treatments.
This notion is supported by the finding that the Stanford cumulative genotypic sensitivity scores of the drug combination at the time of genotyping were generally higher in the PI/r group than in the NNRTI group, and that there was a trend for lower scores (i.e. more resistance) with lower adherence in the PI/r group but not in the NNRTI group. Taken together, these observations suggest that good adherence may be beneficial even with virologically failing PI/r regimens, because this may prevent the accumulation of further resistance mutations and can therefore help to preserve future drug options. In contrast, adherence levels seem to have little impact on NNRTI treatments once the first mutations, and NNRTI mutations in particular, have emerged [37]. Our data are largely consistent with previous studies showing that antiretroviral classes may have different adherence-resistance relationships. Gallego et al [38] found that PI resistance was limited to individuals reporting more than 90% adherence. Parienti et al. found that NNRTI resistance is associated with interruptions of therapy [22]. Sethi et al. [23] and Maggiolo et al [14] found resistance occurring in those on NNRTIs at lower levels of adherence than that observed in patients who develop resistance to PIs. Longitudinal studies by Bangsberg et al. and Miller et al. found that increasing adherence independently predicts the rate of accumulation of drug resistance mutations among patients with persistent detectable viraemia [10,39]. Collectively, these studies have shown that the greatest risk for resistance is in patients with high levels of adherence and incomplete viral suppression, and that this relationship is strongest for PI-based therapy. Our analyses complement previous work by additionally considering the impact of adherence on HIV RNA levels and genotypic sensitivity scores upon virological failure. The importance of viral replication for HIV-related, but not AIDS-defining, morbidities has been demonstrated previously [40], and our data suggest that improved adherence on failing PI/r regimens may still contribute to partial viral control, but not on failing NNRTI treatments. Several limitations should be noted about this analysis. Despite drawing from large observational databases including almost 6800 genotypic drug resistance tests from drug-exposed individuals, the final numbers of patients included in this analysis were small, although of the same order of magnitude as other studies [6,14,15]. Owing to the observational nature of this study, the treatments received were not randomized and the collection of genotypic data was not strictly enforced by study protocols. Moreover, we did not strictly analyze GRTs from first-line failures, but included additional data to increase sample size (Figure 1). We can therefore not exclude residual confounding in our analyses. Furthermore, our adherence assessments are based on self-report, which is prone to recall bias and over-reporting. Although we ask about consecutive missed doses, self-report data may not be sensitive to important adherence patterns, such as treatment interruptions. Moreover, not all individuals had genotypic resistance data available from before treatment initiation or from the time of the first virological failure, and some mutations could potentially have been transmitted.
However, extensive longitudinal assessments of transmitted drug resistance in Switzerland have shown that the presence of drug resistance mutations before therapy initiation is still relatively rare [41]. Our observations have important clinical implications. It is self-evident that good adherence promotes viral suppression and reduces the emergence of resistance mutations. In this analysis, we also observed evidence that adherence levels influenced resistance levels of virologically failing or failed PI/r regimens. Thus, even when partial resistance has occurred, good adherence levels may slow down or even stop progression to higher resistance levels, thereby possibly extending the durability of PI/r regimens [37]. This effect may have great relevance in settings of limited antiretroviral drug availability and virological monitoring. In contrast, current WHO-recommended first-line treatments are based on NNRTI for cost reasons, which must be considered sub-optimal from the standpoint of emergence of resistance. In NNRTI therapies, resistance emergence is more likely with incomplete drug intake and cannot be limited by improved adherence later on. Thus, even temporary slips in adherence carry a substantial risk for losing the full drug combination to resistance. In summary, our data suggest that promoting adherence may still be worthwhile for individuals receiving virologically failing PI/r regimens for damage control, but failing NNRTI regimens should be switched immediately. Table S1. Development of new mutation by adherence level and drug class for individuals who either started their first antiretroviral therapy ever or who started a new class of ART and who had a prior on-treatment genotypic test result available (PI/r: n= 124; NNRTI: n= 111). (DOCX)
Deducing topology of protein-protein interaction networks from experimentally measured sub-networks Background Protein-protein interaction networks are commonly sampled using yeast two hybrid approaches. However, whether topological information reaped from these experimentally-measured sub-networks can be extrapolated to complete protein-protein interaction networks is unclear. Results By analyzing various experimental protein-protein interaction datasets, we found that they are not random samples of the parent networks. Based on the experimental bait-prey behaviors, our computer simulations show that these non-random sampling features may affect the topological information. We tested the hypothesis that a core sub-network exists within the experimentally sampled network that better maintains the topological characteristics of the parent protein-protein interaction network. We developed a method to filter the experimentally sampled network to result in a core sub-network that more accurately reflects the topology of the parent network. These findings have fundamental implications for large-scale protein interaction studies and for our understanding of the behavior of cellular networks. Conclusion The topological information from experimental measured networks network as is may not be the correct source for topological information about the parent protein-protein interaction network. We define a core sub-network that more accurately reflects the topology of the parent network. Background Biological systems are characterized by extremely complex interacting networks of nucleotides, proteins, metabolites and other molecules. It has become increasingly clear that to understand the function of a cell, one must understand the function of these networks. Because the topological characteristics of a network are believed to determine basic properties of its function [1][2][3][4], a primary goal in analyzing biological networksis to determine how the interacting elements (nodes) are connected toeach other (edges or links). The commonly used large-scaleexperimental approaches (yeast two hybrid and affinity pulldown combined with mass spectrometry) for mapping protein-protein interaction networks are extremely useful study, we demonstrate that these experimental samples do not constitute random samples, likely due to the aforementioned experimental considerations. This observation highlights that the experimentally-measured sub-networks may not be the correct source for topological information about the parent protein-protein interaction network, raising the distinct possibility that previous analyses of biological networks [3,4,[10][11][12][13][16][17][18][19][20][21][22] make inappropriate conclusions about topology. Although we conclude in this study that the current experiment datasets cannot be used directly for deducing topological information of the original network, we hypothesized that there is a core sub-network (CSN) within the experimentally sampled network that can better retain the topological information of the original protein-protein interaction network. 
Properties of experimentally-measured protein-protein interaction networks Despite the insights obtained by Stumpf and colleagues [7,8] regarding degree distribution and our numerical analyses of network motifs in randomly sampled networks (Additional file 1 Fig.S1), one is still faced with the problem that experimental sampling may not be random due to one or more of the following reasons: (i) some proteins are used as either bait or as prey, but not both; (ii) experimental results often contain data from different laboratories, species, techniques, etc.; and (iii) even if all proteins under analysis are used as both baits and preys (e.g., large scale yeast two-hybrid approaches), the relative ability of a protein to "behave as a bait" may not be equivalent to (and sometimes is completely different from) its ability to "behave as a prey" due to a variety of reasons. For example, the yeast protein-protein interaction network by Ito et al [23], all 6,000 proteins were used both as baits and preys, but in the resultant network, many proteins exhibited a preferential capacity to act as either a bait or a prey, while some do both. Figure 1a shows five example proteins from this dataset: JSN1 linked to 285 preys when it was used as a bait, but linked to no baits when it was used as a prey; in contrast, GTT1 linked to 21 baits when it was used as a prey but no preys as a bait; on the other hand, proteins SRB4, STD1, and APG17 act similarly as bait and prey. On the basis of this observation, one could envision three basic types of protein functions in the experimental setting ( Fig. 1b): pure bait (blue dot in Fig. 1b), pure prey (green dot in Fig. 1b), and both bait and prey (red dot in Fig. 1b, abbreviated as BP in this paper). These protein types can combine to form a network such as that shown in Fig. 1c. The same features exist in all other protein-protein interaction networks we analyzed, i.e., some proteins can link to a number of other proteins when used as either bait or prey, but most proteins "link better" as either a prey or bait. Figure 1d shows the percentage of the three types of proteins in several experimental datasets. Here we first defined the sub-network composed of the proteins which have both bait and prey functions, and the links among these proteins (red dot and links in Fig. 1c), as a "core sub-network" (CSN). Although the proteins can act as both bait and prey, some of them are still very biased towards one behavior or the other, resulting in very asymmetrical bait and prey behaviors of the proteins. The pure baits and pure preys are the extreme cases of this asymmetrical bait and prey behavior. We first exclude these extreme proteins and develop later a quantitative method to further refine the CSN. Ideally, if the interactions (in this study, we count A-B as one link, but A → B with A as bait and B as prey and B → A with B as bait and A as prey, as two interactions) between the proteins were completely sampled, there would no pure baits or pure preys. One can attribute the occurrence of the asymmetrical properties to the limitations of experimental systems or to the proteins being artificially sorted by the way the experiments were carried out. However, the asymmetrical bait and prey properties can also occur with random sampling if the sampling of the interactions is incomplete. To exclude that the measured network is indeed a randomly sampled sub-network of the original network, we did further analyses of the experimental datasets. 
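The classification used throughout this section (pure bait, pure prey, BP) and the first-pass CSN can be expressed compactly. The sketch below assumes the raw data are available as directed (bait, prey) pairs; the helper name and the toy interaction list (which reuses protein names from Fig. 1a but invents the pairings) are illustrative only.

```python
# Sketch: label proteins as pure bait, pure prey, or BP from directed
# (bait, prey) pairs, and keep links among BP proteins as the first-pass CSN.
def classify_and_extract_csn(interactions):
    """interactions: iterable of (bait, prey) protein-name pairs."""
    baits = {b for b, _ in interactions}
    preys = {p for _, p in interactions}
    bp = baits & preys                  # observed as both bait and prey
    pure_baits = baits - preys
    pure_preys = preys - baits
    # undirected CSN links: both endpoints must be BP proteins
    csn_links = {frozenset((b, p)) for b, p in interactions
                 if b in bp and p in bp and b != p}
    return bp, pure_baits, pure_preys, csn_links

# Toy example; the pairings below are invented for illustration
toy = [("JSN1", "GTT1"), ("SRB4", "STD1"), ("STD1", "SRB4"), ("APG17", "GTT1")]
bp, pure_baits, pure_preys, links = classify_and_extract_csn(toy)
print(sorted(bp), sorted(pure_baits), sorted(pure_preys), len(links))
# ['SRB4', 'STD1'] ['APG17', 'JSN1'] ['GTT1'] 1
```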
Firstly, if the experimental sampling were indeed random, then the number of observed "pure bait" and "pure prey" proteins following an incomplete sampling should be approximately equal; in fact, however, these numbers are quite different in the experimental datasets (Fig. 1d). Secondly, if the sampling is done ran-Properties of measured protein-protein interaction networks Figure 1 Properties of measured protein-protein interaction networks. a. Table showing example proteins with their respective links when tagged as baits or preys [23]. m is the total number of preys linked to the given protein when it is a bait; n is the total number of baits linked to the given protein when it is a prey. b. Three experimental behaviors of a protein in an interaction experiment. The blue protein acts only to detect other proteins (a 'pure bait') but itself is not a prey; the green protein acts only to be detected by other proteins (a 'pure prey'), itself not functioning as a bait; and the red protein acts as both bait and prey. c. Schematic plot of network structure of an experimentally measured network. d. Pie chart for the three types of proteins, color-coded as in Fig. 1b, for the experimental protein-protein interaction data from yeast [23], Drosophila [20], and human [12] Yeast Drosophila Human domly with incomplete sampling of interactions, the chance of experimentally detecting a protein that links to many other proteins as a bait, but to none as a prey, should be very low. This is supported by the results shown in Fig. 2, in which we calculated the ratios of the proteins which link to 10 or more proteins when used as baits but none as preys, to either the total proteins of the network (magenta) or the total proteins who link to more than 10 proteins as bait no matter how many proteins are linked to it when acting as prey (blue). In the real datasets (Fig. 2a), the ratios are very high, while they are much lower in true random sampling simulations (Fig. 2b). We calculated these ratios for simulated Erdös-Rényi random, exponential, power-law, and truncated power-law networks, and they are all in the same order of magnitude as the results for the truncated power-law network shown in Fig. 2b. The high chance (Fig. 2a) that a protein links to many proteins as a bait, but to none as a prey, indicates that the proteins were sorted into different categories (pure bait, pure prey, both bait and prey) by the experiment. The results in Fig. 1d and Fig. 2 show that the bait and prey behaviors in experimental datasets differ substantially from a true random sampling; in other words, experimental sampling is not random. This supports the idea that bait/prey preference is an artifact of the experimental limitations and/or sampling methods, as previously suggested by Aloy and colleagues [24], and Maslov and Sneppen [25,26]. Therefore, based on the available theory on random sampling [7,8], one cannot extrapolate the topological information from the experimentally measured sub-networks to the entire network. Evidence supporting that experimental sampling of protein interaction networks is not random Figure 2 Evidence supporting that experimental sampling of protein interaction networks is not random. The ratio is defined as the proteins which link to 10 or more proteins when used as baits but none as preys versus either the total proteins of the network (magenta) or the total proteins who link to more than 10 proteins as bait no matter how many proteins linked to when used as preys (blue). a. 
Experimental datasets of yeast, Drosophila, and human. b. Truncated power-law network for different interaction retention rates. The sampling was done as follows. We first generated a large network and random sampled the proteins and interactions in a way that matches the size and number of links in the Drosophila dataset. Note that to maintain the sampled network at the size of the Drosophila dataset for different retention rates of interactions, one needs to start with networks of different original sizes. 5000 random samplings were carried out for each retention rate. Bars represent the average ratio and triangles are the maximum of the 5000 samplings. Effects of experimental sampling on network topology To show how the experimental sampling affects the topological information, we first studied effects of the ratio of the three types of nodes in the sampled network on the degree distribution and motif structure. We generated three theoretical networks (15,000 nodes each) with different topologies (Erdös-Rényi random distribution with an average connectivity equals 40, exponential distribution p(k) ∝ e -0.025k , and scale-free distribution p(k) ∝ k -1.4 ) and used the Drosophila protein-protein interaction (DPPI) network by Giot et al [20] as if it were a theoretical network without the original bait and prey information. To mimic the experimental sampling, we randomly selected 6000 nodes from the 15,000-node parent networks (for the DPPI network, 5980 proteins were randomly sampled from the original 7049 proteins) as the experimental libraries, and randomly assigned proteins (independent of degree/link number) in the libraries to be pure baits, pure preys, or BPs (proteins that can act as both bait and prey), with certain probabilities. Different ratios between these three types were thus obtained. We then applied the following rules to the interactions: (i) any interaction originating from a pure prey or terminating on a pure bait is forbidden (see Additional file 1 Fig. 2); (ii) all other interactions are detectable according to a probability q (In Fig 3, we focused on the effects of the three types of proteins but not the random sampling of the interactions, and thus we chose q = 1); and (iii) that a link between protein A and protein B exists in the measured networks when at least one of interactions A → B and B → A is detected. For comparison, we also performed a true random sampling of the original networks using the same number of nodes as the simulated experimental networks. Note that in the resultant network, one observes new ratios between the pure preys, the pure baits, and the BPs, which are different from the prior assigned rations. This is because of incomplete sampling, i.e., some of the prior assigned BPs become either pure baits, pure preys, or isolated nodes (which are not detected) in the resultant network. In this study, when we refer to a protein as a pure bait, a pure prey, or a BP, we refer to the observed behavior of the protein, not the prior assigned property. Figure 3a shows the degree distributions of the four types of networks. For Erdös-Rényi random and exponential networks, the degree distribution of the simulated experimental network (symbols) becomes increasingly different from the corresponding random sample network (lines) as the proportion of pure baits or preys increases. For the power-law network, the degree distribution is unchanged. 
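A minimal sketch of the simulated experimental sampling just described (rules i-iii) is shown below, assuming the parent network is given as an undirected edge list; the function name, default role probabilities and toy usage are illustrative assumptions rather than the authors' code.

```python
# Sketch of the simulated sampling: choose a library, assign each node a role
# (BP, pure bait, pure prey), forbid interactions that start from a pure prey
# or end at a pure bait, detect allowed directed interactions with probability
# q, and record an undirected link if at least one direction is detected.
import random

def simulate_experiment(edges, nodes, library_size, p_roles=(1/3, 1/3, 1/3), q=1.0):
    """edges: undirected (u, v) pairs; p_roles: probabilities of (BP, bait, prey)."""
    library = set(random.sample(sorted(nodes), library_size))
    roles = {n: random.choices(("BP", "bait", "prey"), weights=p_roles)[0]
             for n in library}
    observed = set()
    for u, v in edges:
        if u not in library or v not in library:
            continue
        for a, b in ((u, v), (v, u)):            # try both directions a -> b
            if roles[a] == "prey" or roles[b] == "bait":
                continue                          # rule (i): direction forbidden
            if random.random() <= q:              # rule (ii): detection
                observed.add(frozenset((u, v)))   # rule (iii): undirected link
                break
    return roles, observed

# Toy usage on a 20-node ring
nodes = list(range(20))
edges = [(i, (i + 1) % 20) for i in range(20)]
roles, observed = simulate_experiment(edges, nodes, library_size=15)
print(len(observed), "of", len(edges), "links observed")
```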
The DPPI network exhibits a truncated power-law distribution, and therefore minor effects are observed for small connectivity since it is dominated by a power-law component, but larger effects of the differences in sampling man-ifest for larger connectivity due to the exponential tail of the degree distribution. The sub-network within the measured network that contains only BPs-which is a random sample of the library and therefore a random sample of the full network-may maintain the distribution characteristics of the full network. However, all links between two pure baits and between two pure preys are missing in the measurement. As such, the contribution of the pure baits and pure preys are biased and may change the characteristics of the degree distribution. An extreme example of this phenomenon can be observed with a random degree distribution with protein ratio of 2:1:7 (BP: pure bait: pure prey) in which the observed degree distribution of the sub-network displays two peaks with the smaller one contributed by the pure preys alone. We also counted the sub-graphs of the networks as performed in previous studies [27][28][29]. Theoretically, a randomly sampled sub-network retaining all links (q = 1) should maintain the ratios between different types of motifs, based on the following argument: a given fournode motif (for example) in the parent network remains intact in the sampled sub-network if and only if all 4 nodes are in the sub-network. If the sub-network is sampled by selecting nodes with a probability p, then a fournode motif survives with probability p 4 . Since all motifs have the same survival probability, the percentage of different motif types will not change in the randomly sampled sub-network. On the other hand, in the simulated experimental network, the three types (BP, pure bait, pure prey) may change the survival probability, i.e. the probability that the link is maintained in the sample. Effects of experimental sampling on the network degree distributions and motifs. a. Degree distributions for different theoretical networks. The libraries of 6,000 proteins were first randomly sampled from the original networks of 15,000 proteins. For the DPPI network, 5980 proteins were randomly sampled from the original 7049 proteins. To simulate the effects of experimental sampling on degree distribution (symbol), different ratios of red:blue:green (red: bait and prey, blue: pure bait, and green: pure prey as defined in Fig. 2) were then assigned in the sampled network. These ratios include red:blue:green, 1:0:0 (cyan); red:blue:green, 1:1:1 (magenta); and red:blue:green, 2:1:7 (black). For comparison, the degree distribution (line) of a randomly sampled network of the same size for each ratio from the parent network is also shown. b. The percentage of different four-node motifs for the library (black), the sampled network (magenta) of red:blue:green of 2:1:7, and the CSN (red) from the sampled network, for the same four types of networks shown in a. c. Comparison of degree distributions between sub-networks (symbols) and randomly sampled networks with the same size (lines). The library (black) is the same in a; the sampled network (magenta) has the ratio red:blue:green of 2:1:7; and the CSN (red) is from the sampled network. work and therefore retains the topological information of the full network. 
However, the simulated experimental network will change the topological information: in Erdös-Rényi random and exponential networks, the change includes both degree distribution and motif distribution; for power law and DPPI networks, the change involves the motif distribution. Filtering core sub-network within an experimental dataset Based on our analysis above, it is not surprising that the bait/prey preference affects the network topology so that it cannot be used to predict the topology of the parent network. But it is also not non-intuitive that the core sub-network (CSN) which is composed of only BPs (the red dots and lines in Fig. 1c) may better reflect the topological information of the parent network since the proteins in that network are somehow less biased or better represented. It is obvious that in our computer simulated networks (Fig. 3), the CSN is a true random sample of the full network; therefore, the degree distribution and motif structure of this random sample agree very well with the original network. However, in the experimental datasets, even in the CSN as defined above (the red dots and lines in Fig. 1c), most of the proteins are not equally effective as baits and as preys, but rather, exhibit a bias behavior as either bait or prey. This feature exists in all protein-protein interaction networks we analyzed. For example, protein SRB4 in the yeast dataset (Fig. 1a) is very effective when used as a bait, but much less so as a prey. Specifically, it linked to 95 (we denote this number as m) preys when it was used as a bait. Among the 95 preys, 23 (we denote this number as m 1 ) proteins were also labeled as baits in the dataset. This indicates that if SRB4 is also effective as a prey, it should (theoretically) be linked to at least these 23 proteins when it was a prey. However, it was only linked to 3 (we denote this number as n) baits (TAF17, YNR024W, and RIF2), 2 of which (we denote this number as n 1 ) themselves behave as preys. Unfortunately, none of the 3 proteins that SRB4 linked to when it was a prey belonged to the list of 23 proteins that should have been able to link with SRB4. If SRB4 was equally effective as both bait and prey, it would link to the same 23 baits when it is used a prey, resulting in 23 bidirectional interactions; however, none of these bidirectional links were detected in the experiment. In fact, in all the available experimentally-measured datasets [11,12,20,23], the incidence of bidirectional links is very low. For example, in the yeast network by Ito et al [23], there are only 74 bidirectional interactions out of 4,549 total interactions among 3,278 proteins. In the human network by Stelzl et al [12], 8 out of 3,269 interactions are bidirectional. In the DPPI network by Giot et al [20], the value is 266 out of 20,405. Most of these bidirectional interactions (260 out of 266) were retained in their high-confidence dataset though the total interactions were reduced to 4,780, sug-gesting that most of the detected bidirectional interactions are true links. The reason for the prevalence of this incongruent behavior of proteins in one scenario versus another (i.e. preferential actions as bait or prey) is unclear, but may result from altered protein folding, differences in post-translational modification, necessity of tertiary interactions, or other factors. According to our analysis above, exclusion of pure baits and pure preys does not eliminate the biased behavior of proteins from the CSN. 
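The bidirectional-link counts quoted above (for example, 74 of 4,549 interactions in the yeast dataset) amount to counting protein pairs detected in both directions. A small sketch, assuming directed (bait, prey) pairs as input:

```python
# Count protein pairs for which both A -> B and B -> A were detected.
def count_bidirectional(interactions):
    directed = set(interactions)
    pairs = {frozenset((a, b)) for a, b in directed
             if (b, a) in directed and a != b}
    return len(pairs)

print(count_bidirectional([("A", "B"), ("B", "A"), ("A", "C")]))  # 1
```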
To further refine this network, we first define two quantities-the bait score and prey scoreto quantitatively characterize the experimental behavior of individual proteins. These two quantities are empirically defined as: bait score = m/n 1 , prey score = n/m 1 (truncated to 1 if greater than 1). The rationale for these definitions is as follows. For the hypothetical Protein X, m is the number of preys to which Protein X links when it is a bait protein, among which m 1 proteins are themselves also baits in the experiment. The number of baits to which Protein X links when it is a prey protein, is denoted by the term n. In the perfect experiment, when Protein X functions as a prey it should therefore link to at least m 1 proteins (i.e. m 1 should be equal to n). This of course is not the case in a real experiment, however, and therefore a protein's behavior as a prey is quantified by n/m 1 , i.e., the prey score. In the experimental setting, n can be larger than m 1 , and m 1 = 0 for the pure preys; therefore, once n>m 1 , we set the prey score to be the maximum 1. Similar nomenclature is used to label proteins from the prey perspective. For a given Protein X, n is the number of baits to which it links when it is a prey, among which n 1 proteins are themselves also preys in the same experiment. As with the bait score above, the experimental data does not show the idealized relationship in which all interactions are detected from both directions, and therefore the bait score is calculated as m/n 1 . Relating these two scores together in the idealized scenario for a BP protein the bait score = prey score = 1, pure baits have bait score = 1 and prey score = 0, and pure preys have bait score = 0 and prey score = 1. For the proteins in red nodes in Fig. 1c, both scores range from 0 to 1, reflecting the aforementioned point that amongst the proteins functioning as both bait and prey, there is a range over which the relative abilities of individual proteins in each of these roles is distributed. Figures 4a and 4b show the two scores for the Yeast dataset and the DPPI dataset (scores for other datasets are shown in Additional file 1 Fig.S3). We can define the core subnetwork (CSN) by filtering out the proteins with low bait and prey scores. This is done by selecting a real number s between 0 and 1 and all of the nodes whose bait and prey scores are ≥ s are members of the CSN. The proteins with both higher bait scores and higher prey scores are less biased and more likely to provide accurate topological Testing the bait and prey symmetry of the network Figure 4 Testing the bait and prey symmetry of the network. a and b. bait (blue) and prey (green) score distributions for the yeast dataset [23] and the DPPI dataset [20]. c and d. bait (square) and prey (triangle) score distributions for the networks defined by the following operation: network level one (blue) defined from the original network (red) by bait score >0 and prey score >0; network level two (cyan) defined from network level one by the same criterion with the bait and prey score recalculated for the network level one; and so forth for network level three (magenta). e and f. bait (square) and prey (triangle) score distributions for CSNs defined from the same original network with different bait and prey core thresholds. Red: original network; Blue: bait score ≥ 0.2 and prey score ≥ 0.2; Cyan: bait score ≥ 0.5 and prey score ≥ 0.5; Magenta: bait score ≥ 0.8 and prey score ≥ 0.8. The theoretical network is a truncated power-law network. 
information. If we filter the dataset by setting the bait score and prey score to be greater than zero, the resultant CSN looks like Fig. 1c, i.e., all the pure baits and pure preys are filtered out. This filtering step is also shown for the Drosophila network in Fig 4C. If one further redefines the bait and prey scores for the CSN, the new score distribution becomes much more symmetric (Fig. 4c). For a randomly-sampled network, the score distribution is symmetric and repeating this sampling process retains the symmetry (Fig. 4d). If we filter the dataset with different bait and prey score criteria, as the score threshold increases, so does the degree of symmetry in the sampled data ( Fig. 4e and 4f). Therefore, the CSN has symmetry similar to that of the randomly sampled networks, providing strong evidence that the CSN's behavior is more akin to that of a true random sample. We calculated the same ratios as shown in Fig. 2 for the CSN of the DPPI datasetthe ratios equal to zero-which is similar to the randomly sampled networks as in Fig. 2b. In the DPPI dataset by Giot et al [20], a confidence score was assigned to each link in the measured network on the basis of experimental data. Figure 5a displays the percentage of links in 10 bins of confidence score for the DPPI network (red line). The other lines are the confidence scores for different CSNs generated from the DPPI network using different bait and prey scores. Note that for all levels of confidence, the percentage of links was higher for CSN regardless of the bait and prey scores. This is particularly evident in higher bins of confidence, emphasizing that the CSN approach identifies (in an unsupervised manner) protein interactions that were experimentally assigned higher confidence. The average confidence score of the DPPI network is 0.328; however, the average confidence score of the CSN (for the highest bait and prey scores) increases to 0.485. Even for the high-confidence DPPI dataset [20], the average confidence score of the CSN is still higher than that of the whole sampled network (Fig. 5b), supporting the CSN method described in this paper as a reliable independent means to assess the topology of the entire network. Lastly, the ratio of pure baits to pure preys is much closer to 1 (which, as described above, is the ideal scenario for a true random sample) when the CSNs are examined as compared to the total experimentally measured network (Fig. 5c), indicating that the CSN may better approximate a random sample of the original network. In fact, this same feature exists in the other experimental datasets [12,23] we evaluated (data not shown), that is, the ratio of pure baits to pure preys approaches 1 for the CSN. When the original DPPI dataset was filtered into the highconfidence one [20], the protein number collapsed from 7048 to 4679 (66% of initial value) and the link number from 20405 to 4780 (23% of initial value). For the CSN generated with bait and prey scores ≥ 0.5 before filtering with confidence score, there were 1149 proteins with 1834 links, of which 130 links were bidirectional, and the average confidence score was 0.438. After the filtering, 702 (61%) proteins, 854 (47%) links, and 126 (97%) Confidence score of core sub-network Figure 5 Confidence score of core sub-network. a. Percentage of links in each bin of confidence score for the DPPI dataset [20] (red) and for several CSNs with distinct bait and prey scores in other colors (cyan, ≥ 0; orange, ≥ 0.25; green, ≥ 0.5; blue, ≥ 0.75; and magenta, = 1). 
In all cases, the CSNs have more links from higher bins of confidence, an independent estimation of the reliability of the data from the experimental setting. b. The average confidence score for the full DPPI network (upper panel) and the high confidence network (lower panel) for different CSNs defined by the same range of bait and prey score thresholds as in a (coloring same as in a). c. The percentage of the four-node motifs of the DPPI dataset (cyan) and of CSN (red) defined by bait and prey score ≥ 0.5. bidirectional links remained, and the average confidence score was 0.747. This exercise demonstrates that the links in the CSN have a much higher retention rate (47% vs. 23%) when filtered with confidence, in further agreement with the higher average confidence score of interactions in the CSN. This conclusion is further substantiated if we regenerate the CSN (with the same bait and prey scores) after filtering the DPPI network to the high confidence DPPI network on the basis of the experimental data: this new CSN has 937 (602 are identical to those in the unfiltered CSN) proteins, 902 (450 identical) links, 223 bidirectional links, and an average confidence score of 0.753, which is substantially increased in comparison to when the filtering is done after the CSN is defined from the DPPI network. Interestingly, 84% (223/266) of the bidirectional links were retained when the CSN was defined after filtering the DPPI network to the high confidence DPPI network, versus 47% (126/266) retention of bidirectional links when defined from the DPPI network prior to confidence score filtering. Thus, this CSN approach is an independent (and complementary) method to identify high confidence links more likely to harbor accurate topological information. We also compared the motif distributions of the DDPI dataset and their CSNs (Fig. 5c). The percentage of the Motif 1 is higher, while that of Motif 2 is lower, in the DPPI network as compared to those observed in the CSN, which agrees with the theoretical analysis in Fig. 3. This is also true for the other experimental datasets (Additional file 1 Fig.S4). Based on the analyses above, we hypothesize that the CSN within the experimentally sampled sub-network is a closer approximation of a random sample and thus retains the topological information of the original network better than the entire experimental sample. Theoretically, filtering the experimental datasets using our method with higher bait score and prey score thresholds, one can obtain a better CSN. However, due to the limited number of proteins in the network, higher bait and prey scores result in fewer proteins in the CSN, which may cause the CSN to be too small to faithfully retain the topological information of the parent network. What are the degree distributions of protein-protein interaction networks? A number of studies have suggested that protein-protein interaction networks are scale-free [3,4,[10][11][12][13]18], whereas other studies have contested this interpretation [19][20][21][22]. Han et al [9] showed that the scale-free nature may be due to the low sampling rate and imperfect sampling methods which can cause a sub-network from a Erdös-Rényi random network to appear scale-free. For this to happen, a key feature is the loss of the peak in the bino-mial distribution of the random network. 
Since the peak is located at [Nγ] ~ [(N-1)γ] = [<k>] ([x] is the integer part of x, N is the size of the original network, γ is the sampling rate, and <k> is the average connectivity of the sampled sub-network; see Additional file 1 text for details), when <k> < 2 the peak will disappear. However, the average connectivity <k> of most of the measured networks is greater than 2, even for some of the CSNs we examined (Additional file 1 Table S1), indicating that the protein-protein interaction networks may not be random networks. On the other hand, our analysis shows that if the protein-protein interaction networks are scale-free (that is, if they have a power-law distribution), the degree distributions of either a random sample, an experimental sample or the CSN all closely resemble the same power-law distribution of the original network (see Fig. 3). This may be true even though a randomly sampled sub-network of a scale-free network may not truly be scale-free in the theoretical sense, as shown by Stumpf et al [7]. In fact, most of the experimental datasets exhibit a truncated power-law distribution p(k) ∝ k^(-δ) e^(-εk) (see Additional file 1 Fig. S4), and for the DPPI dataset (Fig. 6a), it is well fit by p(k) ∝ k^(-1.2) e^(-0.038k) as shown by Giot et al [20]. A CSN with both bait and prey scores greater than or equal to 0.5 has a degree distribution close to p(k) ∝ k^(-0.6) e^(-0.22k), which has a larger exponential component but smaller power-law component than the DPPI network. For the high-confidence dataset of the DPPI network (Fig. 6b), it can be well fit by p(k) ∝ k^(-1.26) e^(-0.27k), while the CSN defined by both bait and prey scores greater than or equal to 0.5 has a degree distribution p(k) ∝ k^(-0.01) e^(-0.75k), which is almost completely exponential. To show that this effect is not due solely to the reduction in network size, we also show the degree distributions of two random subsets of the experimentally sampled network: one where the protein number is the same as that of the CSN (called random sample 1) and the other in which the link number is the same as that of the CSN (called random sample 2), both of which have degree distributions that are very different from the CSN. In other datasets we analyzed, the degree distributions of CSNs all have a smaller power-law component and a larger exponential component as compared to the original datasets (Additional file 1 Fig. S4). However, we are not able to completely rule out that the reduction in network size contributes to the enhancement of the exponential component. The two randomly sampled networks in Fig. 6a are not very different from the CSN in both the power-law component and the exponential component. While the networks in Fig. 6b have much stronger power-law components than the CSN, there are relatively few data points making up the degree distribution for the randomly sampled networks.
Discussion
The present study provides an improved method for extracting accurate topological information about real protein-protein interaction networks from experimentally-obtained sub-networks.
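As a concrete illustration of the filtering step behind this method, the sketch below recomputes the bait and prey scores defined earlier (bait score = m/n1, prey score = n/m1, both truncated at 1) and keeps the proteins whose scores both reach a threshold s. The input format, the handling of zero denominators and the function name are assumptions for illustration, not the authors' implementation.

```python
# Sketch of score-based CSN filtering from directed (bait, prey) pairs.
from collections import defaultdict

def csn_by_scores(interactions, s=0.5):
    """Return the set of proteins with bait score >= s and prey score >= s."""
    preys_of = defaultdict(set)   # X -> preys detected when X was a bait (m)
    baits_of = defaultdict(set)   # X -> baits detected when X was a prey (n)
    for b, p in interactions:
        preys_of[b].add(p)
        baits_of[p].add(b)
    all_baits, all_preys = set(preys_of), set(baits_of)

    def score(num, den):
        if num == 0:
            return 0.0            # no observed behavior in this role
        if den == 0:
            return 1.0            # nothing contradicts the observed behavior
        return min(1.0, num / den)

    csn = set()
    for x in all_baits | all_preys:
        m, n = len(preys_of[x]), len(baits_of[x])
        m1 = len(preys_of[x] & all_baits)   # preys of X that are also baits
        n1 = len(baits_of[x] & all_preys)   # baits of X that are also preys
        if score(m, n1) >= s and score(n, m1) >= s:   # bait and prey scores
            csn.add(x)
    return csn
```

Filtering with a higher s keeps fewer but less biased proteins, mirroring the trade-off noted above between score symmetry and the size of the resulting CSN.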
The fundamental conclusions of this study can be summarized as follows: (i) random sampling of networks preserves topological information, regardless of the type of network analyzed; and (ii) experimental protein-protein interaction studies have well-established limitations that make their method of sampling non-random; however, (iii) definition of a CSN that contains proteins that behave experimentally as both baits and preys better approximates a random sample and therefore increases the accuracy of topological assessment of protein-protein interaction networks. We show that sampling of theoretical protein interaction networks with exponential, random or scale-free topology in a manner that takes into account experimental limitations, can (and indeed, usually does) produce a sample with scale-free topology; it is given that samples of protein interaction networks appear scale-free; from this, however, it cannot be concluded (as has been previously attempted) that protein interaction networks are scale-free. Based on our method of defining CSN from the experimental datasets, we show that the degree distribution of the original network may not be scale-free, but may in fact exhibit an exponential distribution. Protein interaction analyses have unavoidable limitations including false positive and negative identifications [30][31][32][33] and assumed binary interactions, as mentioned above. We suspect that these false positives may contribute to the observed power-law component of the protein-protein interaction networks based on the following rationale: (i) the highconfidence Drosophila network (purportedly containing fewer false positives [32]) has a stronger exponential component (also verified by Przulj and colleagues [21]) and the CSN has an even higher confidence score and stronger exponential component (Fig. 5 and Figs. S4); (ii) many proteins preferentially behave as either baits or preys but not both, suggesting an experimentally-introduced preferential attachment phenomenon (introduction of hubs by Degree distribution of the core sub-network Figure 6 Degree distribution of the core sub-network. The degree distributions of the original dataset (cyan) and the CSN (red) and two randomly sampled subnetworks for the DPPI dataset (a) and the high-confidence DPPI dataset (b) by Giot et al [20]. experimental bias) which, as shown by Barabasi and Albert [34], is a key factor for occurrence of power-law distributions; and (iii) the degree distribution of a mammalian protein-protein interaction network obtained by Ma'ayan et al [29] from the literature, which should have a much lower rate of false positives, exhibits an almost purely exponential distribution (Additional file 1 Fig. S5). Additionally, the failed detection of links between certain proteins (the green ones or blue nodes in Fig. 1c) due to the aforementioned experimental considerations may contribute to the high rate of false negatives, which may thereby also contribute to the power-law component of the distribution. Although we show evidence that the degree distribution of protein-protein interaction networks might exhibit stronger exponential component, further detailed analyses are needed to substantiate this conclusion. Determining with high confidence topological information about protein-protein interaction networks from the properties of a smaller, experimentally measured, sub-networks has been challenging [35][36][37]. 
However, the topologies of the networks are extremely important for their function and robustness [1][2][3][4][38][39].
Conclusion
In this study, we have developed an improved method for extracting topological information for cellular protein-protein interaction networks from experimentally-obtained datasets. As structure, or network anatomy, is a necessary precursor to understanding function, or network physiology, these findings enhance our ability to use existing experimental methods for protein-protein interaction analysis to investigate the behavior of these networks in vivo.
Theoretical networks
Theoretical networks were generated following the method by Bender and Canfield [43]; that is, we assigned a desired number of edges to each node following the theoretical distribution, then randomly linked a pair of nodes to make an edge and decreased the link number of both nodes by one, until all edges were assigned to nodes without repetition. Random networks were generated according to the Erdös-Rényi model, whose binomial degree distribution is p(k) = C(N-1, k) p^k (1-p)^(N-1-k), where p is the probability that any two of the N nodes are connected.
Simulated experimental networks
To mimic the experimental sampling, we first generated the theoretical parent networks with N nodes by the method mentioned above. Then we randomly selected M (M < N) nodes from the N-node parent network, and randomly assigned the nodes in the M-node network to be pure baits, pure preys, or both baits and preys, with different probabilities independent of the number of links of the nodes. We then applied the following rules to the links of the selected nodes: 1) any interaction that starts from a pure prey or ends at a pure bait is forbidden; 2) each of the allowed interactions is detected with probability q (in the simulations in Fig. 3, we used q = 1); and 3) a link between proteins A and B exists in the measured network when at least one of the interactions A → B and B → A is detected.
Motif detection
We detected the motifs using the software mfinder1.2 developed by U. Alon's lab [44].
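To make the network-generation step concrete, here is a small degree-sequence ("stub matching") sketch in the spirit of the Bender-Canfield construction described above. The truncated power-law parameters merely echo those quoted for the DPPI fit, self-loops and repeated links are handled by simple re-shuffling, and all names are illustrative rather than the authors' code.

```python
# Sketch: build a network with a prescribed degree distribution by stub matching.
import math
import random

def sample_degrees(n, delta=1.2, eps=0.038, kmax=300):
    """Degrees k >= 1 drawn with p(k) proportional to k**-delta * exp(-eps*k)."""
    ks = list(range(1, kmax + 1))
    weights = [k ** (-delta) * math.exp(-eps * k) for k in ks]
    return random.choices(ks, weights=weights, k=n)

def configuration_model(degrees, max_tries=1_000_000):
    """Pair stubs at random, rejecting self-loops and repeated links."""
    stubs = [node for node, d in enumerate(degrees) for _ in range(d)]
    if len(stubs) % 2:
        stubs.pop()                      # need an even number of stubs
    random.shuffle(stubs)
    edges, tries = set(), 0
    while len(stubs) >= 2 and tries < max_tries:
        tries += 1
        u, v = stubs[-1], stubs[-2]
        if u != v and frozenset((u, v)) not in edges:
            edges.add(frozenset((u, v)))
            del stubs[-2:]
        else:
            random.shuffle(stubs)        # try a different pairing
    return edges

edges = configuration_model(sample_degrees(2000))
print(len(edges), "edges among 2000 nodes")
```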
Community psychiatry in the RAF: an evaluative review Supervision. It has previously been suggested that training schemes establish their own guide lines as to what constitutes good training and that trainees take a lead in auditing practice against these guidelines (Davies. 1993). Our project provided the basis for such a practice on the UMDS training scheme. Consultants and trainees are able to make significant improve ments at a local level in the absence or anticipation of more explicit guidelines from the College. Such an approach allows the trainee and consultant to agree upon the content of supervision, and allows for the fact that some consultants may have special expertise in some areas and less knowledge in others. A centrally led dictate on supervision could be hard to implement effectively and maybe unsuited to the diversity of services, trainees and consul tants. FETTER M. HERRIOT,Noarlunga Health Services, Noarlunga Centre, South Australia 5168: KAMALDEEP BHUI, Institute of Psychiatry, London Core psychiatry for tomorrow's doctors Sir: The College Working Party's report on core learning in undergraduate psychiatry (Psychia tric Bulletin, August 1997, 21. 522-524) mirrors changes already taking place in medical schools throughout the UK. In Nottingham a set of learning objectives, very similar to that proposed by the working party, was devised (Cantwell & Brewin, 1995). The list differs mainly in the realm of attitudes, an area which caused us the greatest difficulty. Unending suggestions for inclusion could be made in a discipline which students are likely to approach with a mix of attitudes based on lay misconceptions and fears. Like the working party we addressed issues about stigma, the multi-disciplinary team and the interplay between psychological and physical factors. Two other areas stood out in relation to controversies over psychiatric practice and the particular importance of the doctor-patient re lationship in psychiatry. Our list is given below. (A complete list of all learning objectives given to Nottingham psychiatry undergraduates is avail able from the authors.) Learning objectives, attitudes: (a) appreciate the inter-relationship between physical and psychological symptoms and the need to be aware of psychological factors in all medical conditions: (b) recognise the stigmatisation associated with mental illness and learning difficul ties and how this can affect patients and their families; (c) be aware of the ethical dilemmas and controversies involved in the diagnosis and management of mental disorder; (d) appreciate the function of the multi-dis ciplinary team and the role of each of its members: (e) as you progress through your attachment, acknowledge the importance of the ther apeutic relationship between doctor and patient and how the time scale for change is lengthened in psychiatry. The need to challenge perceptions of psychia try is undeniable, and the devising of objectives relatively straightforward. We have no easy answers, however, on how such attitudinal change can be brought about in new curricula. Community psychiatry in the RAF: an evaluative review Sir: Julian Hughes is to be congratulated on using an audit of his first six months' experience of providing community services in the Royal Air Force (RAF) to produce a provocative and thoughtful paper (Psychiatric Bulletin. July 1997, 21, 418-421). However, there are two omissions of pertinent fact. 
Most significantly, at the time of his audit the RAF was in the middle of major changes, almost halving its manpower. Many were facing redundancy. Given this very high level of social change, the finding of an unusually high referral rate for adjustment disorder is not too surprising, and caution should be exercised in drawing wider conclusions on the general pattern of psychiatric morbidity in the Services. Second, the RAF's medical services were themselves in the process of major change. The future provision of medical services has been addressed in the Defence Costs Study that closed the RAF psychiatric centre. The resulting turmoil would have disrupted the normal functioning of the Community Department. Again, it is appropriate to exercise some caution on the conclusions to be drawn.

The RAF community psychiatric teams provide a primary care liaison psychiatric service. Audit has shown them to be both effective and efficient, and they enjoy the strong support of the RAF medical executive. Service medical officers make direct referrals, either to the community psychiatric nurse or to the psychiatrist. The team is closely integrated, and community psychiatric nurses refer to the psychiatrist for diagnostic, managerial or administrative decisions. Communication plays an important role in its successful functioning and the psychiatrist has an important supervisory role. It would have been of great interest to have a view of these diagnostic, managerial, educational and supervisory aspects.

Hughes correctly identifies occupational issues as of major concern to the Services. The occupational role of military psychiatry is the reason for its existence. The number of uniformed psychiatric personnel is determined by the requirements of the war role; in operations RAF psychiatric personnel deploy to form psychiatric support teams. Peacetime care is provided from within these resources. Psychiatric services are no longer available to dependants, and our civilian colleagues will indeed need to become more aware of the special circumstances of the Services.

The role of research in psychiatric training: the trainees' perspective Sir: I read the Collegiate Trainees' Committee comment on the use of logbooks in training (Psychiatric Bulletin, May 1997, 21, 278-280) with a sense of déjà vu. In the article they state that trainees should be permitted to develop skills in management or teaching as an alternative to research, given that ". . . not all trainees are interested in, or successful at, research." In 1995 I was invited to conduct a study of trainees' attitudes to research in south-west Thames, as chair of the trainees' committee. A high response rate was obtained (68%, n=122), as compared to previous studies (Hollyman & Abou-Saleh, 1985; Junaid & Daly, 1991). Fifty per cent of registrars and 21% of senior house officers (SHOs) had conducted some form of research. The most notable finding was an inverse relationship between enthusiasm for research and training experience. Eighty per cent of registrars and 100% of SHOs believed research was important for a career in psychiatry, while only 52% of registrars and 67% of SHOs believed it should be important. The latter correlated in inverse proportion to the length of time in training. Crisp (1990) has argued that all psychiatric trainees would benefit from research experience, in order to develop "a capacity to think systematically, measure comprehensively and accurately and analyse the information". Alternatively it could be argued that formal research should not be imposed on the majority. In keeping with the Nietzschean spirit of National Health Service reforms, research could focus on a potential academic elite, improving quality at the expense of quantity. This would also remove the unnecessary burden of 'publish or be damned' from the ranks of disinterested trainees.

Stalking the stalkers Sir: From the 16th June 1997 the United Kingdom has come in line with other countries by introducing the Protection from Harassment Act 1997 to combat stalking. This allows a court to order imprisonment or a fine and allows for the awarding of damages for "anxiety and any financial loss resulting from harassment". The behaviour of stalkers is generally traumatic for their victims, and a recent study shows that a considerable percentage have been forced to change their lifestyle, move accommodation up to five times, change employment, curtail their social outings and remain in a state of siege in their homes. A preponderance of victims report deterioration in their physical and/or mental health since the onset of their ordeal, which may have lasted for as long as 20 years. The stalkers may resort to violence, damage property or possessions of their victim, and
v3-fos-license
2018-12-07T07:25:33.338Z
2017-09-29T00:00:00.000
54900267
{ "extfieldsofstudy": [ "Computer Science" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "http://ejournal.ukm.my/3l/article/download/18122/6537", "pdf_hash": "e83358e42bee4dbc21c5e0a5ee57eedbcb86b691", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41748", "s2fieldsofstudy": [ "Education" ], "sha1": "e83358e42bee4dbc21c5e0a5ee57eedbcb86b691", "year": 2017 }
pes2o/s2orc
Teacher Trainees' Perspectives of Teaching Graphic Novels to ESL Primary Schoolers Students in the 21st century are exposed to multimodal texts, which are texts that combine the modes of print, images, gestures and movements. The graphic novel is one example of a multimodal text, and this genre is introduced in the Language Arts module as part of the English language subject in the new Curriculum for Primary Schools in Malaysia. Hence, it is important that teachers should first be aware of how to make the most of multimodal texts before introducing their pupils to the strategies necessary for comprehending the text. However, without proper training on how to approach the genre, the teaching of graphic novels may pose difficulties for teachers in general and especially so for teacher trainees. This paper reports the findings of a survey conducted on teacher trainees to explore the challenges they faced in teaching graphic novels to primary schoolers. Results show that although the graphics succeeded in enticing the pupils into reading the text, the teacher trainees felt that the graphics did not help their pupils in understanding the storyline. The pupils' eagerness to go through the graphics caused them to ignore the words in the speech balloons. Consequently, this led to incomprehensible input and misinterpretation of the content. Results from these preliminary findings can be used to further investigate the strategies good readers use to read and comprehend graphic novels, so that teacher trainees would be better prepared to utilise graphic novels in their English classes.

INTRODUCTION In the traditional view, literacy pedagogy has always focused on language texts, where language is the only mode of communicating and delivering information. Nonetheless, with the advancement in technology, texts are now produced on a multimodal platform. Siegal (2012) defines multimodality as the simultaneous presentation of more than one mode in a single text or event. On a similar note, Rajendra (2015) asserts that combining two or more semiotic systems in a text creates a multimodal text. These multimodal texts, such as websites, video games, school textbooks, picture books, magazine articles, advertisements and graphic novels, involve a complex interplay of written text, visual images, graphics and design elements (Kress & Van Leeuwen 2006, Kress et al. 2001, Unsworth 2001). In a visually-oriented culture today, children and teenagers are among the major audiences exposed to these varieties of information sources. Picture books, for instance, are multimodal texts that have been used widely in elementary classrooms for many years (Serafini 2011). A study by Hassett and Curwood (2009) on children's picture books explains that the multimodality of picture books takes various forms. In such books, meanings are conveyed through the use of three sign systems, namely written language, visual images, and visual design elements (Serafini 2012). However, in the context of English as a Second Language (ESL) in particular, the use of multimodal texts is still limited, although the development is encouraging and highly commendable. This paper puts forth the argument that there is a need to take a closer look at using multimodal texts in the primary ESL classroom. The present study investigates teacher trainees' perceptions and experiences when utilizing graphic novels, one example of a multimodal text, in their primary classrooms. 
The graphic novel was introduced in schools via the new Curriculum Standard for Primary Schools in Malaysia, as a genre to be taught in the Language Arts module of the English language subject in primary schools in 2011. The use of graphic novels in the English classroom is seen as a change from traditional texts, which are generally mono-modal in nature, to texts that are multimodal. As such, graphic novels embrace many varied modes such as words, images, colours and page layout (Serafini 2012). Although the idea of using graphic novels as a new form of literature in the primary school curriculum is still fairly new to teachers as well as pupils, the unique combination of two rich modes, the linguistic and the visual, makes them an effective pedagogical tool. Yang (2008, p. 187) states that graphic novels "bridge the gap between media we watch and media we read". This is particularly accurate in describing the twenty-first-century society, which is constantly surrounded by information delivered through a variety of visual media like television, video games and the internet. It is the combination of these visual images and the traditional text that we normally read that emphasizes the uniqueness of graphic novels. Consequently, graphic novels can also be used to meet traditional literacy goals of text comprehension as well as multiple literacies (Brenna 2013, Lapp et al. 2012, Risko et al. 2011, Schwarz 2006).

The history of graphic novels is closely related to how comics developed. Many scholars agree that comics and graphic novels have a lot in common. Abbot (1986), for instance, describes comics as "a medium that combines written and visual art to an extent unparalleled in any other art form" (p. 155). McCloud (1993) defines comics as "juxtaposed pictorial and other images in deliberate sequence" (p. 9). On the other hand, a graphic novel, according to Eisner (1985), is "an arrangement of pictures or images and words to narrate a story or dramatize an idea" (p. 5). Carter (2007) identifies graphic novels as "book length sequential art narrative featuring an anthology-style collection of comic art" (p. 1). Similarly, Seelow (2010) defines a graphic novel as an "extended, self-contained comic book" (p. 57). Based on all these definitions, it can be concluded that comics are similar to graphic novels in terms of the relationship between graphics and words that are presented in sequence to form a narrative. The main difference between graphic novels and comics is their length. Weiner (2002) highlights that because an entire story is bound and published in a single release, graphic novels are relatively thicker than comics. Comics, on the other hand, are produced in sequels and serials and released at regular intervals. In addition, Romagnoli (2013) further adds that graphic novels usually have more serious content than comics. Regardless of the differences, both comics and graphic novels have evolved to become more acknowledged and recognised among readers. 
In the past, comics were believed to impede reading comprehension and imagination and to cause eyestrain (Dorrell, Curtis & Rampal 1995). Wright (2001) even claimed that comics promote negative values such as violence, racial stereotypes, homosexuality, rebelliousness, and illiteracy. However, times have changed as comics and graphic novels have become more accepted as a valid form of literature and art. Emerging research argues that graphic novels are assets to the teaching of literature in classroom settings. Among others, Öz and Efecioğlu (2015) reported that the inclusion of the graphic novel as a text used for high school students in Ankara, Turkey, was a huge success in increasing the students' motivation by stimulating visual reading. The findings showed that graphic novels played a significant role in helping the students to understand literary elements and vocabulary and to infer deeper meaning from the text. Another study that shows graphic novels have brought positive impact to the reading comprehension skills of EFL learners was conducted by Basola and Sarigulb in a Turkish university in 2012. This study, however, highlighted that the practice of pre-reading, while-reading and post-reading activities along with the use of graphic novels was the determining factor of a successful reading lesson. Earlier, Gavigan (2010) conducted a case study on four struggling male adolescent readers on how they responded to graphic novels during a graphic novel book club. The findings revealed that graphic novels improved their reading engagement as well as increased their motivation to read.

Studies on the use of graphic novels in Malaysian classrooms are still very much in their infancy, since the genre has only been incorporated in the primary English language syllabus in 2011. A study by Faezal Muniran and Md. Ridzal Md. Yusof (2008) encourages the use of comics and graphic novels in Malaysian schools and libraries in promoting literacies. Yunus et al. (2011) reported that teacher trainees also benefited from using digital comics, particularly those downloaded from websites, to encourage their students to write in English. On top of that, Sabbah, Masood and Iranmanesh (2013) recommended that graphic novels be used in primary schools based on the effective results in improving students' reading comprehension, mainly for visual learners. In addition, the study, which used Felder and Solomon's (2001) learning style index to determine the students' learning styles, found that the use of graphic novels may also be beneficial for reluctant and struggling readers. Interestingly, besides English, other subjects have also benefited from the use of graphic novels. For instance, a study by Hii Sii Ching and Fong Soon Fook (2013) has revealed the positive value of multimedia-based graphic novels on students' critical thinking skills toward History learning. A more recent study focusing on the effectiveness of using comics as a learning tool in the teaching and learning of Science in primary schools shows a significant increase in the achievement of the pupils in the topic of Energy, thus improving their higher order thinking skills and their ability to remember Science facts and concepts (Krishnan & Kamisah Othman 2016). 
Despite the commendable values testified by these studies, claims that the use of graphic novels has been successfully implemented in the new curriculum for ESL primary schools cannot be made without seeking the opinions of the teachers. Fauziah Ahmad (2008) advocates that teachers' attitudes, beliefs, perceptions and background knowledge are among the most significant factors that have been cited as affecting the implementation of an educational reform. Ambigapathy Pandian (2006) expresses his concern about the challenges Malaysian teachers of English face in the evolving literacy phenomenon. In this case, naturally, teachers should first be aware of how to make the most of multimodal texts before introducing the pupils to the strategies necessary for comprehending such texts (Serafini 2008, 2009). However, without proper training on how to approach the genre, the teaching of graphic novels may pose difficulties for teachers in general and especially so for pre-service teachers. This is because pre-service teachers may face difficulties in exploring contemporary texts that they themselves might not necessarily be familiar with; hence, to teach literacy practices around multimodal texts that they might not equate with reading may be a problem (Hamston 2006).

A preliminary face-to-face enquiry on the teaching of graphic novels among pre-service teachers at the Institute of Teacher Education Raja Melewar Campus, Seremban, revealed that the teacher trainees were not ready to teach the new form of literature to primary pupils. In addition, the pro forma for the Children's Literature and Language Arts course in the TESL (Teaching English as a Second Language) programme includes no specific topic that addresses the teaching of graphic novels. It is based on these setbacks that the current research commences. It is hoped that the teacher trainees' personal perceptions of graphic novels and their experiences when teaching graphic novels during their final three-month practicum will provide useful insights on how graphic novels could be used in classroom practice. This study was a preliminary study to investigate the strengths and weaknesses of teaching graphic novels that may lead to future studies on how best graphic novels should be taught in primary schools. For these purposes, the following research questions were formulated: 1. How do teacher trainees perceive graphic novels in the teaching of Language Arts? 2. What were their experiences in teaching graphic novels to Year Four and Year Five pupils?

LITERATURE REVIEW GRAPHIC NOVELS AND THE LITERATURE COMPONENT The implementation of the literature component in the primary and secondary English language syllabus is seen as an effort to elevate the level of proficiency among Malaysian students. Incorporating literature in English into the English language programme was "to help students improve their language skills (especially reading) and also to experience both education and pleasure when reading literary texts" (Vethamani 2004, p. 57). The teaching of literature in Malaysian ESL classrooms also aims to contribute to students' "…personal development and character building and widen their outlook of the world through reading about other cultures and world views" (Ganakumaran 2003, p. 
39). The implementation of the Contemporary Children's Literature (CCL) programme in 2003 for upper primary pupils in Malaysia was intended to improve the teaching and learning of English in the ESL classroom through the use of storybooks or children's literature (Nur Haslynda 2014). This is an intensive reading programme which uses three prescribed texts per year. The texts consist of short stories and poems.

In October 2010, the Ministry of Education issued a circular on the implementation of the new Standard Curriculum for Primary Schools to replace the Integrated Primary Schools Curriculum (Nor Haslynda A. Rahman 2014). The curriculum reform, or its widely used Malay equivalent 'Kurikulum Standard Sekolah Rendah' (henceforth, KSSR), involves all subjects including English. Its main concern is "to restructure and improve the current curriculum to ensure that students have the relevant knowledge, skills and values to face the challenges of the 21st century" (Ministry of Education Malaysia 2012, p. 6). Through this transformation, the English language teaching component is geared towards producing students with basic language skills to enable them to communicate effectively in a variety of contexts that are appropriate to the pupils' level of development (Ministry of Education Malaysia 2010). In the KSSR curriculum, literature in English is given a bigger role by introducing Language Arts and graphic novels as part of the prescribed texts for the pupils. The Language Arts module has been added to the English language curriculum from Year 1 to allow pupils to engage with and enjoy stories, poems, songs, rhymes and plays written in English. Through fun-filled and meaningful activities, this component allows pupils to use fictional and non-fictional sources so that they will gain a rich and invaluable experience using the English language (BPK 2011). At the upper primary level, classics like The Jungle Book (for Year 4), Gulliver's Travels (for Year 5) and The Wizard of Oz (for Year 6) in the form of graphic novels are introduced as required literary texts.

GRAPHIC NOVELS AND TEACHERS' PERCEPTIONS A significant amount of research has already attested to the literary and pedagogical value of graphic novels. Among others, graphic novels provide motivation for reluctant and struggling readers to read (Brozo, Moorman & Meyer 2013, Snowball 2005, Crawford 2004), improve comprehension and critical thinking skills (Sabbah et al. 2013, Maderazo 2010) and assist in the teaching of vocabulary (Basal et al. 2016, Connors 2011). To date, however, little research has examined the manner in which teachers respond to these texts, especially in the ESL context. Thus, this study aims to investigate teachers' perceptions and attitudes in using graphic novels in the primary ESL classroom.

Yunus et al. (2011) conducted a study using digital comics with the purpose of motivating low proficiency students to improve their writing skills. This study found that 30 TESL teacher trainees responded positively towards using digital comics in teaching writing. Due to their ease of use, the teacher trainees reported that digital comics allow low achieving ESL learners to write short sentences in English. These attributes motivated the learners to write creatively, which in a way transformed a strenuous writing lesson into a more enjoyable experience. 
Another study, which looked into teachers' and students' perceptions towards reading a graphic novel, was conducted by Pishol and Kaur (2015). Combining a multiliteracies approach in a reading lesson, the findings showed that the use of the graphic novel and the multiliteracies approach had created a more engaging, enjoyable and interesting reading lesson. In addition to the positive findings of the previous studies, graphic novels can also be used creatively in teaching vocabulary. Basal et al. (2016), for example, conducted a study on 72 first-year students from an English Language Teaching Department of a state university in Turkey. Adopting a quasi-experimental method, this four-week-long study investigated the effectiveness of vocabulary learning via graphic novel. The presentation of figurative idioms was found to be more helpful when portrayed in texts accompanied by illustrations than in normal traditional texts.

Despite the commendable values that graphic novels have gained, there are still many educators who seem reluctant to use the genre in their classrooms. A study by Lapp et al. (2012), for example, used a survey on teachers' attitudes towards graphic novels and how graphic novels are used in their classrooms. It was found that although elementary teachers report a willingness to use graphic novels, their practice in the classroom showed otherwise. Their limited attempts to use graphic novels were due to the lack of instructional models, the lack of graphic novels in the classroom and the teachers' level of familiarity with the genre. Similarly, a study conducted by Annett (2008) on six middle school, high school and college English teachers showed that although these teachers were keen to teach English using graphic novels, unfamiliarity with the genre made them reluctant to do so. She added that teachers who wish to use graphic novels in the classroom require "some techniques and strategies to analyse the visual aspect of the storytelling" (p. 151). Consequently, lacking the skills to explore the visual design of graphic texts was one of the main reasons for the hesitance. Acknowledging this, Connors (2011), in another study, put forth an initial stage of a shared vocabulary for analysing images that teacher educators and pre-service teachers can draw on to evaluate and analyse visual texts such as graphic novels. He specifically focussed on the role three concepts played in the design of visual texts, namely basic shapes, perspective and left-right visual structure. Connors also demonstrated how the concepts function by applying them to his own reading of panel excerpts from a graphic novel by Joe Kelly and J.M. Ken Niimura entitled I Kill Giants. Despite the aforementioned benefits and shortcomings of using graphic novels in ESL contexts, much of the research to date has focussed on teachers at secondary and tertiary levels. 
In relation to the newly introduced graphic novels in Malaysian primary schools, little is known about how teachers, pre-service and in-service alike, react towards the transformation and how this shapes their practice in the classroom. Carless (1998) emphasises that teachers' perceptions and attitudes will govern the kind of behaviour that will be cultivated in real classroom activities. Thus, the current study aims at gaining a deeper understanding of a group of young teachers' perceptions and experiences in teaching graphic novels in Malaysian ESL primary school classrooms. The findings of this study will hopefully provide new insights on the pedagogical aspects of utilizing graphic novels in ESL primary level classrooms.

METHODOLOGY PARTICIPANTS The participants for this research were 57 teacher trainees who were undergoing a five-year degree programme in the Teaching of English as a Second Language (TESL) at the Institute of Teacher Education in Negeri Sembilan. The programme requires the teacher trainees to complete three periods of practicum in primary schools of three different durations: the first is for a month, the second is for two months and the third is for three months, i.e., during the last semester of their study. This research was conducted at the time when these teacher trainees had just completed all three stages of their practicum teaching. The teacher trainees' respective schools determined which group of pupils (i.e., which year) they were to teach for their final practicum. Since the inclusion of graphic novels as part of a text in the Language Arts module only affected Year Four, Year Five and Year Six, not all of the teacher trainees were given the opportunity to teach this new genre. Out of the 57 teacher trainees, only 28 had the opportunity to teach English using graphic novels to Year Four and Year Five pupils. None of the teacher trainees taught Year Six pupils, since those pupils were sitting for the Ujian Penilaian Sekolah Rendah, a national level examination, at the end of the year.

INSTRUMENT The data were gathered through a questionnaire with open-ended items which was administered online. The main reason for distributing the questionnaire online was that the teacher trainees were located in different areas of Negeri Sembilan for their practicum. A questionnaire is a standard data gathering instrument for a needs analysis (Griffee 2012). For this reason, it was an appropriate instrument for collecting data regarding the teacher trainees' perceptions of and experiences in teaching graphic novels. 
The questionnaire is divided into three sections. Section A is on the teacher trainees' reading habits. The questions aim to find out whether the teacher trainees enjoy reading and the types of reading materials that they prefer. Apart from that, their first encounter with graphic novels is also asked about in this section. Section B covers the teacher trainees' perceptions towards graphic novels. The three main questions in this section concern the teacher trainees' definitions of graphic novels, their text preference (traditional or graphic novel) and how the teacher trainees make sense of graphic novels. Section C consists of questions regarding the teacher trainees' experiences in teaching using graphic novels during the practicum period. This section has the most questions and is further divided into three sub-sections. The first sub-section is to find out their prior knowledge or experience in using graphic novels in their teaching, whether the teaching of graphic novels was covered during their degree programme, and which level of pupils they taught during their practicum. The second sub-section seeks the teacher trainees' opinion on their pupils' reactions when graphic novels were introduced in the classroom. Finally, the last sub-section covers questions on methods of teaching, their preferences regarding the use of graphic novels and also the challenges that they faced in the classroom.

The analysis of the data gathered drew on grounded theory informed techniques. This was done by developing theory from the themes and concepts that emerge from the data as the researcher analyses them (Corbin 2007). In the first phase, open coding allows the researcher to form initial categories of information about the teacher trainees' perceptions and experiences in teaching graphic novels. The second phase requires the researcher to select one open code, position it at the centre of the process being explored and then relate other categories to it. This is known as axial coding (Creswell 2008). Finally, the third phase, selective coding, allows the researcher to come up with a theory, or in this case an explanation of the teacher trainees' perceptions of graphic novels, and to discover what they experienced when teaching graphic novels to primary pupils.

LIMITATIONS When interpreting the results of the present study, there are some limitations that should be acknowledged. First, the data utilised in this study come from one source, i.e. the online survey questionnaire; interviews would have resulted in a more nuanced analysis. Second, the sample for the study is limited to one cohort of teacher trainees from the Institute of Teacher Education Raja Melewar Campus; future studies should be conducted with larger samples from the various Institutes of Teacher Education in Malaysia. In addition, the research should also be extended to include in-service teachers for a more comprehensive outcome. Although the results cannot be generalised, the findings from this case study can serve as a basis for further investigation into understanding the best practices in teaching graphic novels in ESL reading classrooms. 
FINDINGS TEACHER TRAINEES' READING BACKGROUND Table 1 illustrates the first section of the questionnaire, which focuses on getting background information regarding the teacher trainees' reading habits and reading material preferences. In general, all 57 participants indicated that they enjoyed reading. As for the types of reading materials that they read, magazines were ranked the highest (43%), followed by online reading materials (21.0%). This survey also highlighted that newspapers were the least chosen, with only nine participants (15.8%) opting for them. In terms of their first encounter with graphic novels, more than 84% of the teacher trainees had the experience of reading the genre when they were in primary and secondary schools. Conversely, seven (12.3%) of them admitted that they were only exposed to graphic novels during their degree programme at the Institute of Teacher Education. It is interesting to note that their early or late exposure to reading graphic novels may have some implications for their own experiences in teaching with graphic novels during the practicum periods. The questionnaire included a question regarding their preferences for either the graphic novel or traditional texts. A sizable number of teacher trainees (73.7%) replied that they preferred graphic novels to traditional texts. The illustrations were the main factor for the graphic novel preference. Most of those who had chosen graphic novels over traditional texts commented that apart from being "colorful" and "attractive", the illustrations helped them to comprehend the texts better. One participant said, "…the illustrations help me to know what the writers are trying to deliver clearly" [SP9]. On the other hand, it is interesting to note that those who disliked reading graphic novels (19.3%) said that they found the graphic novel "confusing" and "messy". Other than that, these teacher trainees commented that in traditional novels the plot of the story would be padded out with detailed descriptions of the setting, the characters, even the tone and emotions. Graphic novels, to this group, seemed to provide less of this kind of information. In addition, one participant responded that instead of having illustrations laid out for readers, she prefers to have her own imagination illustrating the events that occur in a novel. Among their responses were: "I feel that the graphics limits my imaginations and in some cases it doesn't require my imaginations at all." [SP23] "Graphic novels portrayed small size images that share the space with limited dialogues." [JP12] Another group of participants, with the smallest percentage (7.0%), gave neutral comments when asked about their choice of text. They stressed that some novels are best portrayed in words. An example that one of them mentioned was: "…books that were too difficult to be drawn or illustrated-abstract concepts, for instance." [SP5]. Another participant felt that her choice of text would depend on the content. Her response was: "If I need to read serious matter such as political issue and religion, I prefer traditional text to seek information. But if I want to read for enjoyment and to release tension or to fill my leisure time, graphic novel will do." 
[SP36] In responding to the question on how the teacher trainees make sense of graphic novels, 46 (80.7%) of the teacher trainees felt that the combination of graphic images and speech balloons is the essence of a graphic novel. They felt that word balloons convey the story while graphic images enhance readers' understanding. Another participant stated that with both elements relying on each other to deliver meaning, the pupils are then able to continue reading. A smooth reading experience without interruptions or pauses due to incomprehension is crucial so that reading will be an enjoyable task for the pupils. A participant commented that "Both elements should complement each other, not conquer. The imbalance would cause misinterpretation to the readers." [SP17] Nine (15.8%) participants expressed that images are of paramount importance for graphic novels to make sense. One participant, who stressed that "pictures speak louder than words" [JP6], went on to say that the uniqueness of graphic novels lies in the creativity of the illustrator to deliver the meaning of the story meticulously. These participants argued that the images are more than just pictures, as the use of colors can convey the exact mood and emotions. One of the comments was "if the image is drawn using black and brown, the author can create a mysterious, gloomy and somewhat scary mood." [SP36] In contrast to the other two groups, two (3.5%) participants voiced that the speech balloons have the utmost role in graphic novels. They felt that for any reading material, the words should be the catalyst in boosting the imagination of the readers. To them, the presence of graphic images will only limit the imagination.

TEACHER TRAINEES' EXPERIENCES OF TEACHING GRAPHIC NOVELS The description of Section C of the questionnaire is displayed in Table 3. Twenty-eight teacher trainees were able to share their experiences teaching graphic novels during their practicum. Seventeen (60.7%) of them responded that they had been taught how to read and teach graphic novels in the TESL degree programme. However, another 11 (39.3%) participants claimed that they had not been trained before. This contradictory result was due to the fact that graphic novels are not taught as a specific topic in the Language Arts course. Moreover, the need to expose the teacher trainees to how to read and teach graphic novels only became necessary after graphic novels were introduced as part of the literature component in the Language Arts module in 2011. Thus, the teaching modules that came together with the texts were seen as a tremendous help to all the teachers, and especially so for these teacher trainees. Seventeen (60.7%) participants had the experience of teaching Jungle Book by Carl Bowen to Year 4 pupils and 11 (39.3%) participants taught Gulliver's Travels by Jonathan Swift to Year 5 pupils. Thirteen participants (46.4%) reported that none of their pupils had read graphic novels before. Nine (32.1%) of them commented that half of the pupils had been exposed to graphic novels, and six (21.4%) of the participants found that almost all of their pupils had read graphic novels. However, the teacher trainees highlighted that most of the pupils read graphic novels in Malay and most of them had been exposed to reading comics, which have similar features to graphic novels. 
The pupils' reactions also varied according to the teacher trainees. More than 85% of the teacher trainees stated that their pupils were excited when the graphic novel was first introduced in class. Most of them were eager to read the text due to the graphic images. A participant commented that "…the children loved it, especially the boys because 'The Jungle Book' has dark colors which seems more masculine and it features many animals that captures the boys' and the girls' attention." [SP29] Another reason for the enthusiasm to read was that graphic novels were not like the ordinary texts that they usually encountered. Graphic novels had "more pictures and lesser words" [SP16], similar to the comics which they were familiar with. Nonetheless, there were four (14.3%) participants who reported that their pupils were "blank and blur" [JP12] when the graphic novels were given to them. The teacher trainees felt that this was probably because the texts were new to the pupils, and not knowing how to read them properly caused confusion.

The majority of the teacher trainees (85.7%) admitted that they did not explain the features of graphic novels to their pupils before they started teaching. Most of them felt that this was unnecessary since the main purpose of the lesson was to understand the content of the story. Some of the participants claimed that they were not aware of the importance of teaching the features to their pupils. Another reason was that the participants felt that introducing the features would probably "kill their excitement" [JP9] to read the graphic novels. Some of the participants also argued that their pupils already knew about the features of graphic novels, since most of them were familiar with reading comics.

All 28 of the teacher trainees agreed that their pupils enjoyed reading graphic novels. This was evidenced when one of the participants commented that "…the pupils were more focused and paid attention during the reading activity." [SP7]. The illustrations were a major motivating factor for the pupils. Since the graphic images enabled them to rely on those images to aid comprehension, the pupils were able to "guess the content" [JP10] and continue reading without having to pause to decode the meaning. This is especially so for pupils who have low proficiency in English.

Due to the pupils' positive reactions to graphic novels, 21 (75%) teacher trainees felt that they too took pleasure in teaching the genre. They commented that because the pupils took less time to read the graphic novels, more activities could be done during the lesson. This group of teacher trainees was also fortunate to be able to use the teaching modules that proposed creative activities to be done in the classroom. Nevertheless, seven (25%) teacher trainees (four of whom taught Year 4 pupils and three of whom taught Year 5 pupils) realised that teaching graphic novels was not an easy task. They admitted that this was probably due to the lack of knowledge on how to utilize the genre effectively. They opined that because they did not have enough pedagogical exposure to the genre, they ended up using the same strategies, which led to boredom for the pupils. Apparently, their reluctance in using graphic novels was concurrent with their claim of not having access to the teaching modules. Other than that, the features of graphic novels, described by one participant as having "watered down language and content, and confusing gaps in the story and panels" [SP38], made it even harder to teach. 
Despite the mixed responses towards the teaching of graphic novels, the teacher trainees revealed some of the challenges they faced in the classroom. The majority of them felt that although the graphic images had tremendously fascinated the pupils into reading, ignoring the dialogues in the speech balloons made the pupils fail to understand the storyline correctly. The participants expressed their fear that merely looking at the pictures alone may lead to misinterpretation of the story. They commented: "It is important for pupils to look at the pictures as well as read the words given so that they are able to obtain correct information. Sometimes, looking at the pictures alone can be misleading." [SP3] "the pupils are not focused in reading but rather flipping to other pages to see the illustrations." [SP5] As a result, the pupils may not benefit from the uniqueness of graphic novels that blend dialogues and graphic images to tell the story.

DISCUSSION The teacher trainees' perceptions towards graphic novels were mainly influenced by their own early exposure to reading comics when they were in school. Their association of graphic novels with comics is in accordance with Cary's (2004) definition, which positions graphic novels under the umbrella of 'comics'. A study by Annamalai and Muniandy (2013) found that the reading habits of Malaysian Polytechnic students indicated a high rate of reading for entertainment, and not for academic purposes. Apparently, apart from newspapers and magazines, comics are one of the reading materials that are popular among teenagers. In addition, it is worth noting that some of the teacher trainees who prefer traditional texts over graphic novels had only started reading graphic novels when they were much older. One of the reasons for their preference is probably that they were not used to reading in the graphic novel layout.

The teacher trainees also perceive graphic novels as loaded with pictures. Likewise, Mouly (2011) describes graphic novels under the guise of comics that aim to tell stories in pictures. Like many other researchers who support graphic novels (Brenna 2013, Lapp et al. 2012, Mouly 2011, Risko et al. 2011), 74% of the teacher trainees also felt the same. It was the power of pictures that drew the teacher trainees to choose graphic novels over traditional texts. This finding is expected, as images have been an essential part of life in the twenty-first century. These images, in one way or another, define a generation's identity, popularity, and power (Baylen & D'Alba 2015). Naturally, children today, whom Connors (2011) labelled the 'visual generation', would be more attracted to reading materials with images, as they live immersed in a visual culture where images surround them. Despite the inclusion of appealing images in graphic novels, the teacher trainees still believe that a good graphic novel should have a balanced ratio between speech balloons and images. Graphic novels, according to the teacher trainees, should portray the words and the images concurrently, as both contribute to making meaning. This notion supports Çakir (2015, p. 71), who claimed that "in order to create a meaningful learning atmosphere and to offer comprehensible input, word and pictures need to be presented simultaneously". 
A mixed response was obtained on the issue of whether the teacher trainees felt that they had been taught the basic skills for teaching graphic novels in their TESL degree programme. Being an educator at an Institute of Teacher Education, the researcher is aware that graphic novels were not specifically taught as a genre. Graphic novels are included as a component in the Language Arts syllabus alongside short stories, poetry and novels. The need to focus on graphic novels only became necessary when they were made one of the texts for the Literature Component in primary schools in 2011. The teacher trainees' lack of knowledge on how to teach using graphic novels is evident in their admission that the focus of their lessons was mainly to understand the content of the graphic novels. Their failure to expose the pupils to the features of graphic novels, which could aid the pupils in deepening their understanding of the story, highlights the teacher trainees' perception that graphic novels are just another kind of traditional text.

There are a number of reasons that explain the pupils' overwhelmingly positive reactions to the use of the graphic novel in the classroom. The first is the similarity between the format of the graphic novel and that of comics, as most of the pupils had experienced reading comics in the Malay language. Secondly, the fact that reading materials commonly found in bookstores are now being used in formal classrooms made the pupils excited to read in a format that they are familiar with. This notion backs the assertion made by Hines and Delinger (2011) that the use of graphic novels has positively changed their students' view towards reading because they were now eager to read. Besides being an effective element in enticing the pupils to read, the images were able to assist comprehension, as pupils were able to make educated guesses about the storyline based on the images when they could not make sense of the dialogues in the speech balloons. According to Krashen (1989, p. 402), "visuals accompanying texts can provide clues that shed light on the meaning of unfamiliar word or grammatical structure". Due to the positive reading influences that graphic novels have brought upon the pupils, the teacher trainees unanimously concurred that they too enjoyed their lessons using the graphic novels. The teacher trainees spent less time explaining the content of the graphic novels and were therefore able to carry out more language-based activities during the lesson. 
In addition to highlighting the benefits cited by teacher trainees in the use of graphic novels in the ESL classroom, this study also provides valuable insights into the challenges that they faced during their teaching practice. The study revealed that the pupils were more engrossed in looking at the pictures than at the dialogues. Students' dependence on visuals and consequent neglect of the linguistic elements should be looked into, as it may lead to incomprehensible input and, worse, misinterpretation of the content. It is also often assumed by both teachers and literacy educators that young people today have "built-in" multimodal schema that allows them to interpret multimedia texts, such as websites, films and graphic novels, without having to be taught (Groenke & Youngquist 2011). However, the findings of this study show that visual literacy, like other forms of literacy, still needs to be taught formally. Although children are able to use cellular phones, iPods and other devices even before they enter school, the skills used to create and interpret images and the awareness of the vocabulary of shapes and colors must also be learned. In addition, students also need to learn how to read each mode and acquire the techniques for choosing which information to focus on (Steeves 2015). Because not only students but also teachers lack the metalanguage needed to comprehend visual texts, it is believed that these 21st century literacy skills may have to be explicitly taught in schools (Schwartz & Rubinstein-Avila 2006).

With respect to the teaching of graphic novels, various studies that promote the use of graphic novels in education have strongly emphasised the significance of a balance between visual imagery and written words so that comprehension can be attained (Brozo, Moorman & Meyer 2013, Cakir 2015, Carano & Clabough 2016, Murukami & Bryce 2009). Basal et al. (2016) also emphasised that "the quality of illustrations as well as how they are used in relation with the text are among the several criteria which determine their effectiveness in the language classroom" (p. 526). McDonald (2009), who conducted a study on foreign language students using graphic novels, further explained that the reading became more challenging if there was a weak relationship between the text and the illustrations "…because they cannot rely on the images to repeat the key linguistic items" (p. 24). In addition, Whithin (2009) claimed that graphics could also interfere in the process of comprehension. Hibbing and Rankin-Erickson (2003) supported this claim based on their findings that students failed to connect the overall idea of the story when they focused too much on the small visual details presented in the frames of graphic novels. In relation to this current study, we may conclude that failing to embrace the marriage between visuals and texts may affect comprehension and cause further damage, such as causing confusion and disrupting the reading process of graphic novels.

EDUCATIONAL IMPLICATIONS AND CONCLUSION In language teaching, the value of graphic novels has been acknowledged for its ability to enhance critical thinking (Hii Sii Ching & Fong Soon Fook 2013), to increase comprehension of reading texts (Brena 2013), to address students with different learning styles (Seelow 2013), to promote an active learning process when combined with a multiliteracies approach (Pishol & Kaur 2015) and to enrich vocabulary instruction (Basal et al. 
2016). On top of all these advantages of using graphic novels, the current study places emphasis on the importance of having formal instruction on how to utilize the multimodal text effectively. The study explores the perceptions and experiences of teacher trainees using graphic novels in ESL primary school classrooms. The teacher trainees find that although pupils prefer graphic novels over traditional novels, the preference does not ensure better comprehension. Regardless of their exposure to the multimodal surroundings they live in, the pupils still need to be explicitly taught how to make sense of the visual (graphics) and the verbal (text) in a multimodal text like the graphic novel. Similar to traditional texts, visual images are also subject to interpretation. However, the skills, expertise and strategies needed to interpret the combination of images and words may be different from those needed for the traditional word-dense text that pupils normally encounter (Burnmark 2008, in Lapp et al. 2012). Hence, educating students on how to read visual images is necessary (Burnmark 2002, Schwartz & Rubinstein-Avila 2006, Steeves 2015).

At this point, some suggestions could be made for teacher education programmes. The Institutes of Teacher Education Malaysia should offer courses that are specifically designed to help teacher trainees learn about graphic novel conventions and explore multimodal teaching techniques in pre-service training. At the same time, teacher education programmes should also include professional development for in-service teachers on the pedagogical aspects of multimodal texts. Thus, teacher trainees and in-service teachers will be better equipped with appropriate pedagogical practices to face the challenges of teaching multimodal texts such as graphic novels in schools.

Apart from teachers, the results of this study also shed light on how the pupils perceive graphic novels and the potential problems the pupils may face in understanding the text if they fail to pay attention to both the visual and verbal elements. However, making claims based on off-line information, such as evidence centred on what the teacher trainees think happened while the pupils were reading graphic novels, may still not be enough. In the past, similar research had to depend on read-aloud protocols when gathering data to understand how participants make sense of what they read. With the advancement of technology, experiments that collect real-time empirical data to observe and understand how a person performs language tasks are now possible and doable. One such technology is the eye tracking device, which is able to empirically and accurately track and analyse eye movements as a respondent reads a text. Thus, further investigation into the pupils' behaviours when reading and understanding graphic novels using an eye tracker is strongly recommended to confirm the results of this research. It is hoped that a truer understanding of what happens during the process of reading and comprehending multimodal texts such as graphic novels will emerge. Hopefully, this would provide educators with the most effective techniques on how to teach them. 
The findings of this study have enhanced our understanding of the utilization of graphic novels, from the teacher trainees' perspective, in ESL primary classrooms. Although graphic novels come with great benefits, it is only with careful pedagogical planning and the necessary knowledge on how to impart the powerful combination of text and image that teachers can innovatively integrate graphic novels into their classrooms to support students' language learning processes.

In Table 2, graphic novels were categorized as a type of comic "published in series". Apart from that, 'graphics' (17.5%) and 'illustrations' (15.8%) were also used interchangeably.
v3-fos-license
2018-08-14T20:56:46.736Z
2018-08-01T00:00:00.000
51956782
{ "extfieldsofstudy": [ "Computer Science", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/1424-8220/18/8/2612/pdf", "pdf_hash": "ef92a95b40cb620c364baba8cf9dae0d59c5572b", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41750", "s2fieldsofstudy": [ "Computer Science", "Engineering" ], "sha1": "ef92a95b40cb620c364baba8cf9dae0d59c5572b", "year": 2018 }
pes2o/s2orc
Decentralized Online Simultaneous Localization and Mapping for Multi-Agent Systems Planning tasks performed by a robotic agent (RA) require prior access to a map of the environment and the position where the agent is located. This creates a problem when the agent is placed in a new environment. To solve it, the RA must execute the task known as Simultaneous Localization and Mapping (SLAM), which locates the agent in the new environment while generating the map at the same time, geometrically or topologically. One of the big problems in SLAM is the amount of memory required for the RA to store the details of the environment map. In addition, environment data capture needs a robust processing unit to handle data representation, which in turn is reflected in a bigger RA unit with higher energy use and production costs. This article presents a design for a decentralized implementation of SLAM based on a system of wireless agents capable of storing and distributing the map as it is being generated by the RA. The proposed system was validated in an environment with a surface area of 25 m², in which it was capable of generating the topological map online, without relying on external units connected to the system.

Introduction Navigation in an unknown environment is one of the most difficult tasks for a robotic agent (RA), because it has to sacrifice autonomy in the face of the increased difficulty of not knowing its location, the map, or the possible routes it could take to fulfill its objective. Currently, there are techniques for simultaneous localization and mapping (SLAM) that enable a single agent or a group of agents to capture information from their environment and generate the map by using sensors. One of the most widely used types of sensors in SLAM is the laser telemetry sensor, or LIDAR. LIDAR sensors measure distances by sending a pulse of infrared light or laser to bounce off surrounding objects. The reflected light is then captured by a scanner and processed [1][2][3]. Other sensors used in SLAM besides LIDAR are monocular or stereo cameras that capture images from the environment. The images obtained are then processed to generate a three-dimensional map [4]. However, the use of both LIDAR sensors and cameras requires a large amount of memory to process and represent the environment map, as they estimate the RA's position through predictive and optimization techniques such as Bayesian filters, particle filters, and Kalman filters with normal distributions [5][6][7]. The proposed system uses wireless sensor network (WSN) technology [8], specifically the power radiation pattern produced by the wireless nodes in the network, known as the Received Signal Strength Indication (RSSI), to reduce the processing requirements for estimating the RA's location in an unknown environment. This parameter is used to triangulate the RA's position in the environment, and requires that a minimum of three nodes in the network are in range to detect the RA, so that its position can be estimated by an external processing unit [9]. Besides serving to obtain the RA's location, the a priori placement of sensors enables the RA to generate a topological map of the terrain by characterizing each network node as a vertex in the map graph, reducing the complexity and the amount of memory required to process the RA's location and to generate the map. 
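As an illustration of this triangulation step, the following Python sketch converts RSSI readings from three anchor nodes into range estimates with a log-distance path-loss model and then solves the resulting circle equations by linear least squares. The reference power, path-loss exponent and anchor coordinates are assumed values for the example only; the paper does not specify them.

import numpy as np

def rssi_to_distance(rssi_dbm, tx_power_dbm=-40.0, path_loss_exp=2.0):
    # Estimated distance (m) from an RSSI reading, log-distance model (assumed parameters).
    return 10.0 ** ((tx_power_dbm - rssi_dbm) / (10.0 * path_loss_exp))

def trilaterate(anchors, distances):
    # Least-squares (x, y) estimate from >= 3 anchor positions and range estimates.
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(distances, dtype=float)
    # Subtracting the first circle equation from the others gives a linear system in (x, y).
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1)
         - np.sum(anchors[0] ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Example: three nodes at known positions, each reporting an RSSI value for the RA.
anchors = [(0.0, 0.0), (5.0, 0.0), (0.0, 5.0)]
rssi = [-55.0, -62.0, -60.0]
print(trilaterate(anchors, [rssi_to_distance(r) for r in rssi]))

With exactly three anchors the linear system is fully determined; additional in-range nodes simply add rows to A and b, which is consistent with the minimum of three nodes mentioned above.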
However, by requiring the WSN nodes to be positioned a priori and the processing to be performed by an external unit, this approach reduces the system's autonomy. It also requires that each node be placed uniformly in the map, which increases deployment complexity in areas of difficult access. This article presents a novel technique for SLAM that uses an RA to install static wireless agents (WAs) in the environment, which form a multi-agent system (MAS) based on WSN principles to generate topological maps. For this purpose, we propose a differential RA tasked with programming and installing the WAs; the RA is equipped with distance sensors for obstacle detection, a digital compass that allows it to obtain its bearings, and encoders on all its motors to estimate the distance traveled during its displacement. To begin the localization process, the RA creates a graph in which each node represents an (x, y) position in R², and the connections between nodes represent the continuity lines of the topological map. The graph adds new nodes as the RA detects new obstacles or crossings between its previously traveled routes, and the value of each new node is determined by processing the odometry of the RA's motors and the orientation provided by its digital compass. Finally, a WA installation is performed when the RA scans its environment and is unable to detect another WA within its RSSI range. During this process, the graph is programmed into the WA, which receives constant updates while it remains within range of the RA. This allows the map to be continued by a new RA in case the original RA breaks down. The technique introduced in this article presents three major contributions. Firstly, the system removes the requirement of prior installation of the WAs used for obtaining the RA's location. Secondly, it eliminates the dependency on an external unit to process data, enabling the RA to generate the map online. The third major contribution is that the MAS is composed of a single RA and multiple static wireless agents that store the routes. Unlike systems with multiple robotic agents (MARS), the proposed system only requires the use of a single RA, which reduces complexity by not needing to control the behavior of multiple mobile agents [10]. In addition, in case the RA fails, the WAs contain the generated topological map, which can be loaded into a new RA, resulting in increased autonomy and fault tolerance of the system. The article is organized as follows: Section 2 gives a brief review of mobile WSN applications to explain multi-agent SLAM techniques using WAs. Section 3 explains the system design and the SLAM algorithm used. Section 4 describes the tests performed with the RA in a 2D asymmetrical environment. Finally, Section 5 presents the conclusions of this work.
If the network is deployed in a static environment, the nodes are classified as having weak mobility, and in most cases they are pre-installed in the environment to reduce processing time and data transmission latency, and also to increase battery range. If the environment is dynamic or the nodes can move autonomously, the nodes are classified as having strong mobility [15]. In robotics, RAs can be classified as strongly mobile when they are capable of moving in open or closed environments to perform navigation tasks such as planning, localization, and mapping. Caballero et al. [16] proposed a probabilistic framework using a network of robotic systems modeled after a WSN to estimate the location of each node in the WSN using an RA, employing particle filters with Bayesian weighting in which the weight of each particle is updated according to the RSSI parameter. However, this approach is incapable of building a map of the environment, and it requires the network to be pre-installed in the environment. The RSSI parameter has been widely used to obtain the location of RAs in a new environment through triangulation techniques. Elfadil [17] designed a WSN by installing each node in a specific position, enabling the RA to estimate its position through triangulation of the RSSI parameter, Time Difference of Arrival (TDOA), and Time of Arrival (ToA). This approach has been employed with a symmetrical node grid used to build the WSN, which makes it possible to obtain the location of the RA in the environment and to perform planning tasks, as proposed by Zhou et al. [9]. For these approaches to succeed, all nodes need to be installed and programmed with their location in the environment, and none of the nodes may fail. WSNs have also been used to perform SLAM tasks, where the WAs are used as landmarks, sending their information to the RA to enable it to obtain its current location in the environment while the RA simultaneously begins building the map. The information processed by the RA is then shared with the nodes, which helps reduce the cost of building the map by doing it cooperatively. Wong et al. [18] showed how to build the map using a WSN, applying the Joint Probabilistic Data Association (JPDA) technique to reduce uncertainty errors in data processing. This technique assesses the surveyed information at three moments: past, present, and future. However, it requires substantial computing power to build the map. Wijesoma et al. [19] proposed the use of a Maximum Data Association (MDA) value to reduce the computational load, making it possible to build maps containing mobile obstacles. Several techniques have been proposed to eliminate the dependency on WAs being installed a priori in the terrain. Ollero et al. [20] proposed the use of an Unmanned Aerial Vehicle (UAV) to position sensors in an environment with no communication infrastructure, in order to monitor disaster areas or places of difficult access. However, this work does not employ techniques to build maps or locate the agents of the system. Tuna et al. [21,22] proposed the use of MARS systems to deploy WAs using the role-based exploration approach (explorers and transmitters) [23]. The RAs in the role of transmitters send the information to the WA nodes deployed in the terrain, and the explorer RA nodes send information to the central unit. The role of the explorers is to evaluate the RSSI pattern in order to deploy the WAs that survey the environment when searching for people in disaster situations.
To perform localization and mapping tasks, explorer agents sense the environment and build an individual map, which is shared with the other agents through the central unit, performing SLAM tasks cooperatively (CSLAM) [24]. However, this technique reduces system autonomy by depending on a central unit to organize the RAs and to generate the maps. The present article contributes by introducing a new technique to deploy WAs in the environment and build topological maps in a decentralized way, without depending on a central processing unit. To achieve this, it employs a single robotic agent to generate the map and locate the WAs. In case the RA fails, the WAs still hold the information already obtained, including the RA location and the map, and can share it with a new RA introduced into the system or with external monitoring units. Agents Description The multi-agent system is composed of a single RA and multiple WAs. The RA is in charge of generating an online map of its environment while marking critical points at which to deploy the WAs. During the WA deployment process, the RA is in charge of programming each node with the map and its estimated location in the environment. After the deployment process, a WA can accept connections from external agents (EA), which can be monitor agents (MA) that supervise the generation of the map, or planning agents (PA) that generate routes for new RAs. This article uses an RA that contains five distance sensors for obstacle detection in the environment, enabling it to evade obstacles and select possible routes simultaneously. The RA also contains a digital compass that enables it to obtain its bearings, two motors with encoders to estimate the distance covered, a processing unit to control the movement and generate the map, and a wireless unit for communications and programming of the WAs. The localization system uses the digital compass and the motors' odometry as its core. The digital compass provides the RA's bearing, which determines whether its position is incremented along the X or Y axis. Four cardinal positions are established to increment the traveled distance, in accordance with the value measured by the sensor, where 0 degrees is west, 90 degrees is north, 180 degrees is east, and 270 degrees is south. Each of the cardinal positions is stored in the RA's and WAs' memory as an integer identifier, as shown in Figure 1. The traveled distance is estimated through the motor encoders. However, the measurements obtained by the encoders can present errors derived from the varying sizes of the wheel rims and the discrete resolution of the readings, and these measurement errors are classified as systematic errors. For error-reduction purposes, the UMBmark model is implemented in the RA [25], based on the kinematic model described by the translation and rotation matrix of Equation (1).
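Equation (1) and the UMBmark correction terms are not given here, so the sketch below illustrates only the cardinal-heading bookkeeping described above: the compass reading is snapped to one of the four cardinal directions and the encoder-derived distance increments the corresponding axis. The integer identifiers, the axis sign convention, and all function names are assumptions made for illustration; only the 0° = west, 90° = north, 180° = east, 270° = south convention comes from the text.

```python
import math

CARDINALS = {0: "west", 1: "north", 2: "east", 3: "south"}  # illustrative identifiers

def snap_to_cardinal(compass_deg):
    """Map a compass reading to the nearest cardinal identifier (0 deg = west)."""
    return int(((compass_deg + 45.0) % 360.0) // 90.0)

def encoder_distance(ticks, ticks_per_rev, wheel_radius_m):
    """Distance covered by a wheel, estimated from its encoder ticks."""
    return (ticks / ticks_per_rev) * 2.0 * math.pi * wheel_radius_m

def update_position(x, y, compass_deg, distance_m):
    """Increment the (x, y) estimate along the current cardinal axis."""
    heading = snap_to_cardinal(compass_deg)
    if heading == 0:
        x -= distance_m   # west
    elif heading == 1:
        y += distance_m   # north
    elif heading == 2:
        x += distance_m   # east
    else:
        y -= distance_m   # south
    return x, y, heading
```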
Sensor Placement and Exchange of Information The RA's wireless unit is in charge of communications with the WAs. During the communications process, it checks whether there are any new nodes to add to the units, with the goal of sharing the map with all the WAs in the network. It also uses the RSSI to check whether any WAs are in range. If there are no WAs in range, the RA programs a new node. Node programming consists of assigning the node an SSID determined by the RA's name and a consecutive installation number. The information is then stored in the node in a dynamic array that represents the map. This array contains four positions, as shown in Table 1. The first position indicates the node number in the topological map. Positions 2 and 3 in the array indicate the node's position in the environment. The last position in the array contains the node to which it will be connected (Table 1: Array-History WA — ID node, x pos, y pos, LastID node). For the RA to be able to program a node, it needs to create a history record in its memory. The history is stored as a dynamic array in which the first four positions are the same as the ones in the WA nodes' array, with an additional position to store the routes not yet traveled from the nodes (see Table 2). This position stores a string of integer values, separated by periods, identifying the pending cardinal positions. Table 2. Dynamic array for the robotic agent: Array-History RA — ID node, x pos, y pos, LastID node, Pending paths. The RA updates its history according to the values received from its sensors. If the forward sensors detect an obstacle that may cause a collision, or if the lateral sensors detect a space through which the RA may be able to navigate, these conditions cause the history record to be immediately updated. Figure 3a shows the RA in the initial position. At that moment, the first identifier is created with the initial conditions, which are the location (0, 0) in R². It also detects the pending locations at location 1, which implies it can move to the north, according to the value obtained from its digital compass. The agent validates the history to detect the possible routes, updates the history, and moves. The RA keeps moving until the distance sensors detect a collision (see Figure 3b). This causes the history to be updated again with the location value processed through odometry, and positions 0 and 2 are added as possible routes to the array. The agent reviews its history again, selects one of the two possible locations randomly, and executes a movement (Figure 3c), removing it from the list of pending locations (Table 3). This process is repeated until the RA detects a frontal collision and has no pending routes in its history. Algorithm 1 shows the history-update model for the RA. Communications between agents are decentralized: a WA can perform the role of Access Point or Client, while the RA performs the role of Client. This information exchange uses the multi-agent model of Xiong et al. [26], with the exception that the WAs are always within communications range and are in charge of sharing and generating the map online with all the nodes in the network, while the RA only programs the WA with the highest RSSI value, reducing packet-loss errors and the need to continuously transmit data to a central unit, which in turn increases the battery autonomy of the agents in the system. When an RA enters a new environment, or a WA is deployed, the WA configures itself in client mode to search for other WAs within range. If it finds another WA, it begins an information exchange process in which the agent with the most stored information is the sender. In case the information exchange takes place between an RA and a WA, the RA stores the SSID of the WA transmitting the information, along with the number of nodes transmitted. If the client unit does not connect to another unit within a 5 s period, it automatically changes its configuration from Client to Access Point.
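Algorithm 2 is referenced but not listed here; the following is a minimal sketch of the exchange step just described, in which the agent holding more nodes acts as the sender and the receiver stores the missing entries together with the sender's SSID. The dictionary layout and function names are illustrative assumptions, not the paper's implementation.

```python
def exchange_maps(ssid_a, map_a, ssid_b, map_b):
    """Each map is {node_id: (x_pos, y_pos, last_id)}, mirroring Table 1.
    The richer map is the sender; the receiver stores the extra nodes and
    remembers the sender's SSID, as in the WA_R1_2 -> WA_R1_1 example below."""
    if len(map_a) >= len(map_b):
        sender_ssid, sender, receiver = ssid_a, map_a, map_b
    else:
        sender_ssid, sender, receiver = ssid_b, map_b, map_a
    missing = {nid: rec for nid, rec in sender.items() if nid not in receiver}
    receiver.update(missing)
    return sender_ssid, len(missing)   # SSID stored by the receiver, nodes sent

# Example with six nodes on one unit and four on the other (cf. Table 4):
wa_r1_2 = {i: (float(i), float(i), max(i - 1, 0)) for i in range(6)}
wa_r1_1 = {i: (float(i), float(i), max(i - 1, 0)) for i in range(4)}
print(exchange_maps("WA_R1_2", wa_r1_2, "WA_R1_1", wa_r1_1))   # ('WA_R1_2', 2)
```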
Figure 4 shows two WAs that are about to initiate the information exchange process. The unit WA_R1_2 is initially set up to work in client mode and does not contain any stored SSIDs. The unit WA_R1_1 is in access point mode. The memory arrays of both units are represented in Table 4, with a total of six nodes in the unit WA_R1_2 and a total of four nodes in the unit WA_R1_1. The unit WA_R1_2 connects to unit WA_R1_1 and initiates the information exchange process, detecting that two more nodes are stored in the unit WA_R1_2 as compared to unit WA_R1_1. These extra nodes are transmitted and stored along with the SSID WA_R1_2. Table 5 shows the arrays contained in both units after the information exchange. To start the exchange of information, the processing unit is responsible for recognizing whether a node is within range for reading or writing. If there are multiple nodes in the search range, the processing unit selects the closest node, while the RA checks whether the node has routes for navigation. If the ID of the node is different from the last ID read, the RA takes the first path of the array and moves it to the position of navigated routes inside the array, thereby updating in the RA the identifier of the last node read. Otherwise, if the ID of the node is equal to the last one stored, the reading of the node is ignored. Finally, if the identifier is different but the array has no pending paths, the RA takes the path opposite to the position that the node has in its array and stores the ID of the wireless node. This process is shown in Algorithm 2. Table 5. Arrays from the WAs in Figure 4 after the exchange of information. The exchange of information between a WA and an EA can be carried out in the same way as an exchange of information between two WAs, using Algorithm 2. Validation and Analysis Prior to the validation process, we need to model the use of the RSSI pattern. For this purpose, we assessed the behavior of the ESP8266 wireless module, which can be configured as a client or an access point. Its behavior was evaluated by sending 100 packets of 1024 bytes each, increasing the transmission distance until packet loss occurred. In the measurements obtained, we observed that packet loss occurs when the RSSI value is below −64 dB (see Figure 5a) and that packets have a latency lower than 1.88 ms when the RSSI is below −50 dB (see Figure 5b). By measuring distance and evaluating the RSSI, we obtained a working range within 5 m (see Figure 5c).
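As an illustration of how these measurements could drive the deployment rule described earlier (program a new WA whenever no existing WA is detected within RSSI range, otherwise update only the WA with the highest RSSI), the sketch below uses the −64 dB packet-loss level reported above as the cutoff. The function, the SSID format, and the decision logic are assumptions for illustration rather than the paper's exact procedure.

```python
RSSI_CUTOFF_DBM = -64.0   # packet loss was observed below this level (Figure 5a)

def select_or_deploy(ra_name, install_count, scan_results):
    """scan_results: dict mapping WA SSID -> measured RSSI (dBm).
    Returns the SSID of the WA to program and the updated installation counter;
    a new node is deployed when no existing WA is in range."""
    in_range = {ssid: rssi for ssid, rssi in scan_results.items()
                if rssi >= RSSI_CUTOFF_DBM}
    if in_range:
        # Program only the WA with the highest RSSI value.
        return max(in_range, key=in_range.get), install_count
    # No WA in range: deploy a new node named after the RA and a consecutive number.
    install_count += 1
    return f"WA_{ra_name}_{install_count}", install_count

# Example: nothing is in range, so a new node WA_R1_3 is created.
print(select_or_deploy("R1", 2, {"WA_R1_1": -78.0, "WA_R1_2": -71.0}))
```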
The validation of the proposed method for the online localization and mapping of unknown environments was performed through three experiments. The first experiment used the proposed method, showing its capability of generating the map in a decentralized manner without the need for a priori WA deployment in the environment. The second experiment used a central unit for map generation; the time needed to complete the map was then compared with the results of the first experiment. The third experiment was devised to assess the fault-tolerance capacity of the proposed system compared with a centralized system. The three experiments were modeled in an environment with a surface area of 25 m × 25 m, as shown in Figure 6, in the V-REP simulator, using a differential robotic agent with a constant velocity of 15 m/s. The wireless modules of the RA and WA agents in the system were modeled using an omnidirectional radiation pattern of 5 m, without taking into account signal loss due to obstacles. The first experiment was able to build the topological map (Figure 7a) in 22 min and 20 s. During the SLAM process, the RA deployed a total of nine WAs to cover the full environment (Figure 7b). Table 6 shows the arrays from the agents after the test; the x pos and y pos values have an offset of −1.5 m, which corresponds to the initial position in the environment. Table 6. Arrays from Experiment 1 after the exchange of information. The second experiment used a total of 16 preprogrammed nodes (Figure 8), deployed a priori in the environment, and reduced the time to generate the map to 16 min and 43 s by performing localization and mapping with the help of WSN nodes [26], with the central unit handling data processing. As shown in Table 7, the centralized approach performs better in terms of time, but it also increases the number of WAs in the environment. The third experiment introduced a communication error and a failure of the RA's motors 7 min after the start of the experiment (Figure 9). When the WAs were unable to detect the RA in the system, they activated an alert message that could be received by an external unit or another RA. In this case, a second RA entered the system and updated its history according to the values stored in the existing nodes of the network (Tables 8 and 9). The new RA was able to build the topological map in 18 min and 21 s, for a total of 25 min and 21 s. To model a failure in the centralized system, an error was introduced into the central communications unit, which prevented the system from completing the SLAM task. Table 8. Arrays transferred to the new RA in Experiment 3 after the injected failure (Array-RA_2; Array-WA_R1_1 through Array-WA_R1_4). The aforementioned experiments show the autonomous capabilities of the proposed system, highlighting the following items, which are a foundation for creating an autonomous MAS [27,28]: • Modularity: The WAs that compose the system do not need to be installed a priori in the environment to function; they are installed by an RA during its navigation of the environment and can modify the information of the system network at any time. In addition, if an agent of the system fails, it can be replaced by another agent with the same characteristics, as shown in the third experiment. • Decentralization: The system does not require an external or central unit to perform control, localization, or mapping tasks. Although the agents can interact with one another to create the topological map and locate the RA in the environment, they do not need permanent communication. • Distributed processes: Localization and mapping tasks do not depend on any single unit, as information processing is performed by the RA and navigation data are stored and shared by the WAs of the network. Conclusions This article presents a novel and decentralized multi-agent system capable of performing SLAM tasks by deploying wireless nodes in the environment through the use of an autonomous RA, avoiding the need for a central unit and the a priori installation of communication infrastructure. Three different experiments were performed in an unknown environment, showing that the proposed system is capable of generating a topological map online by deploying static wireless agents, supported by the encoding of cardinal reference points and the processing of the odometry of the RA's motors to establish its location, without requiring an external unit or the a priori deployment of network nodes. The decentralized MAS enables each WA to store an updated map together with its own location, distributing the information across the whole network.
This information-sharing process also makes the system capable of detecting failures such as an RA malfunction; likewise, if any single WA fails, the RA can deploy a new node in the area where the failure occurred. As future work, this system will be implemented to assess its behavior in rescue operations during emergency situations, since it is flexible, does not rely on any previously installed network to perform SLAM tasks, and can use other external agents to reduce the time needed to analyze the environment.
v3-fos-license
2017-09-19T04:19:52.397Z
2017-09-11T00:00:00.000
2175458
{ "extfieldsofstudy": [ "Medicine", "Psychology" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://ijbnpa.biomedcentral.com/track/pdf/10.1186/s12966-017-0572-1", "pdf_hash": "59a44703a2a6da4794ffc5775a8016ead51d052e", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41752", "s2fieldsofstudy": [ "Psychology", "Medicine" ], "sha1": "59a44703a2a6da4794ffc5775a8016ead51d052e", "year": 2017 }
pes2o/s2orc
Food parenting practices for 5 to 12 year old children: a concept map analysis of parenting and nutrition experts input Background Parents are an important influence on children’s dietary intake and eating behaviors. However, the lack of a conceptual framework and inconsistent assessment of food parenting practices limits our understanding of which food parenting practices are most influential on children. The aim of this study was to develop a food parenting practice conceptual framework using systematic approaches of literature reviews and expert input. Method A previously completed systematic review of food parenting practice instruments and a qualitative study of parents informed the development of a food parenting practice item bank consisting of 3632 food parenting practice items. The original item bank was further reduced to 110 key food parenting concepts using binning and winnowing techniques. A panel of 32 experts in parenting and nutrition were invited to sort the food parenting practice concepts into categories that reflected their perceptions of a food parenting practice conceptual framework. Multi-dimensional scaling produced a point map of the sorted concepts and hierarchical cluster analysis identified potential solutions. Subjective modifications were used to identify two potential solutions, with additional feedback from the expert panel requested. Results The experts came from 8 countries and 25 participated in the sorting and 23 provided additional feedback. A parsimonious and a comprehensive concept map were developed based on the clustering of the food parenting practice constructs. The parsimonious concept map contained 7 constructs, while the comprehensive concept map contained 17 constructs and was informed by a previously published content map for food parenting practices. Most of the experts (52%) preferred the comprehensive concept map, while 35% preferred to present both solutions. Conclusion The comprehensive food parenting practice conceptual map will provide the basis for developing a calibrated Item Response Modeling (IRM) item bank that can be used with computerized adaptive testing. Such an item bank will allow for more consistency in measuring food parenting practices across studies to better assess the impact of food parenting practices on child outcomes and the effect of interventions that target parents as agents of change. Electronic supplementary material The online version of this article (doi:10.1186/s12966-017-0572-1) contains supplementary material, which is available to authorized users. Background Most children's eating patterns and behaviors are shaped by family influences and ultimately can have an important impact on their weight status [1][2][3]. Research designed to better understand how parents influence their children's eating has grown over the past two decades and has resulted in over 75 published articles related to the development of unique food parenting instruments [4]. Most of this work has focused on food parenting practices, or the specific goal-directed parent actions designed to influence children's eating behaviors or dietary intake [5]. With this growing number of available instruments, there is little consensus on how to measure food parenting practices, including which instrument to use and how food parenting constructs relate to or correlate with each other. 
This significantly limits our ability to evaluate the relationships between various food parenting constructs and children's intake or weight status; or compare findings across studies [6,7]. Proposed ways to advance or improve the measurement of food parenting practices on children's eating behaviors and dietary intake include using direct or video observational methods [8] or employing digital technologies or simulations [6]. However, many large descriptive cross-sectional or prospective studies, or interventions will not be able to utilize such assessments due to the associated costs or burden on participants. Enhancing the ways in which behavioral and public health scientists can reliably and validly assess food parenting practices in a standard way via self-report is vital to advancing the field. One method for improving and standardizing the measurement of latent constructs measured by self-report is Item Response Modeling (IRM) of an item bank, supplemented with computerized adaptive testing [6,9,10]. In this approach, a bank of items that assesses the latent construct is developed and calibrated by IRM analysis. Computer adaptive testing of the calibrated item bank allows researchers to select all or a subset of the calibrated items to use, while maintaining the ability to compare the resulting score for the latent construct across studies. For a complex idea with multiple constructs, such as those that correspond to food parenting practices, a conceptual framework is needed to inform how the food parenting practice constructs are operationalized. While a content map for food parenting practices has recently been proposed [11], there is no tested consensus for how specific food parenting practice concepts or corresponding items fit within each construct of the proposed framework. To inform this process, this study aimed to develop a food parenting practice conceptual framework for parents with children 5-12 years old based on an existing systematically derived item bank of food parenting practices [4] using i) an online card sort task conducted by an international sample of experts of food parenting and feeding, followed by ii) a concept mapping analysis of the resulting grouping of food parenting concepts into constructs and a larger framework. The long-term goal of this project is to develop a calibrated IRM item bank that can be used with computerized adaptive testing and can be utilized by other researchers in the food parenting field internationally. Identification of expert panel Scientific experts were recruited to help develop the conceptual framework. Experts were defined as researchers who have either a) developed nutrition-based, family interventions aimed at treating or preventing childhood obesity and/or modifying dietary behaviors; or b) studied the role of parenting and nutrition in the etiology of childhood obesity. A list of experts was created by reviewing: 1) the membership list of the International Society of Behavioral Nutrition and Physical Activity (ISBNPA); 2) the list of attendees to the 2012 pre-ISBNPA workshop focused on improving measures of physical activity and food parenting practices; 3) recent publications on food parenting practices through searches on PubMed, ERIC, PsycINFO, and ScienceDirect; and 4) asking our network of researchers for additional suggestions. In total 32 experts were identified and 25 experts from 8 countries (Australia, Canada, Finland, Japan, Mexico, Netherlands, UK, and USA) agreed to participate (78% response rate). 
All experts were offered an honorarium ($150) for their participation. The sorting task included the participation of 28 experts: the 25 outside experts and three primary members of the research team (TB, TMO, and SOH) who did not conduct the statistical analysis. The protocol was approved by the Institutional Review Boards at the University of British Columbia and Baylor College of Medicine. Identification, reduction and sorting of food parenting practices An overview of the methods of this study can be found in Fig. 1. Previous work by our group [4] systematically identified published food parenting instruments and supplemented the published items with additional items reported by parents to populate a food parenting practice item bank. Briefly, published articles containing at least one scale on parenting or caregiver behaviors related to 2 to 16 year old children's eating, nutrition, or food intake were extracted from 1) articles identified in two recent systematic reviews; [12,13] 2) an additional systematic review of articles published between January 2009 and March 2013 in PubMed, ERIC, PsycINFO, and ScienceDirect; [4] and 3) reviewing and back-tracing the references of articles from steps 1 and 2. The broader age range for the review compared to the ultimate target age range of the item bank (5-12 year old children) was selected in order to capture a wide range of items. A total of 79 measures were identified, consisting of 1392 items measuring food parenting practices [4]. To ensure data saturation of food parenting practices for the item bank, 135 parents who reflected the socio-economic and ethnic diversity specific to Canada and the US were surveyed by an online polling firm (YouGovPolimetrix, USA) about food parenting practices they have used or think other parents use [4]. They contributed 2240 valid (1985 unique, after removal of duplicates) food parenting practices, many of which overlapped with published items [4]. To reduce the 3632 food parenting practice items identified from the published literature and parent reports and make the sorting task manageable for the experts, the binning and winnowing process developed by the NIH PROMIS initiative was used [9,10]. "Binning," or grouping similar food parenting practice items, consisted of assigning the items from the literature review and responses from the parent survey to one of 19 primary codes and a subsequent secondary code [4]. "Winnowing," or removing redundant items, consisted of reviewing each bin and consolidating redundant items. The binning and winnowing process was conducted independently by two research team members, with all discrepancies triangulated by two other members of the research team until a consensus was reached among all four. Two rounds of binning and winnowing of the initial 1392 items found in the literature and the 2240 parent responses took place (see previously published work for the first round) [4], and the final round resulted in 110 key parenting practice concepts. A food parenting practice concept could represent a number of food parenting practice items. For example, one food parenting practice concept was "I reward my child with something tasty (e.g., dessert) as a way to get him/her to eat [food]", where [food] could represent "healthy food", "all his dinner", or "fruits and vegetables" [4]. This concept represented a total of 43 items from the published literature or statements from parent report.
The food parenting practice concepts were then grouped into food parenting practice constructs via Expert Panel sorting. The participating experts were invited to sort the 110 key food parenting practice concepts into meaningful groups or constructs using the web-based Concept Mapping software (Concept Systems Inc., Ithaca, NY). To take advantage of existing substantial conceptual interpretation of food parenting practices, each expert was provided a copy of the previously published Vaughn et al. 2016 content map [11] prior to sorting and instructed to a) utilize the framework to guide their sorting and/or b) to propose a different grouping of food parenting practice concepts. The published content framework grouped food parenting practices into 19 constructs stemming from three larger domains: control, structure and autonomy promotion based on the authors' critical appraisal of the literature [11]. In addition to sorting the concepts into meaningful groups, the experts were asked to name the groups they created. They were also instructed to not group unique practices together (i.e., create a miscellaneous group of leftover practices), but instead create groups of single food parenting practice concepts if only one practice fit within the group. The sorting conducted by the experts was reviewed to ensure that each expert sorted all 110 statements and that no miscellaneous group was formed. One expert did create a miscellaneous group of 10 food parenting practice concepts. Follow up with this expert, resulted in all those concepts being sorted into existing or new categories. Analysis Analysis of the sorting was conducted using nonparametric multidimensional scaling (MDS) [14]. A twodimensional solution was used to assign each food parenting practice concept an x/y coordinate on a point map. Food parenting practice concepts that appeared spatially closer to one another on the point map were grouped by the experts closer together and therefore may represent a similar construct. Acceptable stress values for MDS analysis typically range from 0.205 to 0.365 when used to develop a conceptual framework [15], as opposed to when used in controlled psychometric evaluations, which typically necessitate lower stress values (note that the MDS stress value for our solution was 0.267 and within acceptable range) [16]. A hierarchical cluster analysis was then conducted to identify clusters of food parenting practice concepts from the MDS derived point map. Specifically, the hierarchical cluster analysis was carried out on the x/y coordinates which were obtained from the MDS analysis. The concept mapping software utilizes the Ward's algorithm for the cluster analysis because it: 1) retains the location of the x/y coordinates in the final solution; 2) creates non overlapping constructs; and 3) merges clusters based on the distance of all individual statements instead of using the centroid of a cluster [14]. We adapted the procedure outlined by Trochim [14] to identify the appropriate number of clusters to retain in our solution. 
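As a side note, the pipeline described above — an expert-by-expert co-sorting similarity matrix, a nonmetric MDS point map, and Ward clustering of the resulting x/y coordinates — can be sketched with standard open-source libraries. The snippet below is only an illustration of that flow, not the Concept Mapping software used in the study; the data structures, function names, and parameter choices are assumptions.

```python
import numpy as np
from sklearn.manifold import MDS
from scipy.cluster.hierarchy import linkage, fcluster

def cosort_distance(sorts, n_concepts):
    """sorts: one entry per expert, each a list of groups (iterables of concept indices).
    Returns 1 minus the proportion of experts who placed each pair in the same group."""
    co = np.zeros((n_concepts, n_concepts))
    for groups in sorts:
        for group in groups:
            for i in group:
                for j in group:
                    co[i, j] += 1.0
    return 1.0 - co / len(sorts)

def concept_map(sorts, n_concepts, n_clusters):
    """Two-dimensional point map plus a cluster label for each concept."""
    dissim = cosort_distance(sorts, n_concepts)
    mds = MDS(n_components=2, metric=False, dissimilarity="precomputed", random_state=0)
    xy = mds.fit_transform(dissim)                     # point map coordinates
    labels = fcluster(linkage(xy, method="ward"), n_clusters, criterion="maxclust")
    return xy, labels

# Toy example: three experts sorting five concepts into groups.
sorts = [[{0, 1}, {2, 3, 4}], [{0, 1, 2}, {3, 4}], [{0, 1}, {2}, {3, 4}]]
xy, labels = concept_map(sorts, n_concepts=5, n_clusters=2)
print(labels)
```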
Trochim's approach to identify the number of clusters retained in the solution is iterative but essentially starts by: 1) reviewing an initial cluster solution that is derived statistically with more clusters that would be anticipated; 2) adding more clusters one at a time until it makes no theoretical sense to combine clusters; 3) qualitatively reviewing the statistical solution to refine and fine-tune the shape of the clusters; and 4) having experts review the solution and provide further input into the analyses. As we aimed to identify two potential solutions, we refined this process for the following two solutions: 1) a parsimonious solution and 2) a solution that approximated the Vaughn et al., 2016 content map [11] (referred herein as the comprehensive solution). The parsimonious solution was identified by first evaluating the simplest cluster analysis-generated solution and determining whether adding another cluster based on the cluster analysis made conceptual sense. This process iteratively continued and stopped when it did not make sense to add further clusters. This solution was not constrained by a pre-determined conceptual framework but aimed to identify a parsimonious solution, meaning we looked for larger clusters that contained related food parenting practice concepts. The solution was then examined and subjectively modified to integrate the team's consensus solution of the twodimensional point map. Specifically, the content of each cluster was examined, with emphasis on food parenting practice concepts at the border of each cluster to assess whether it could better fit with another cluster, prioritizing neighboring clusters when appropriate. This iterative process continued until the final solution was obtained. We identified the comprehensive solution by using the Vaughn et al. 2016 content map [11] to initially examine a larger than expected cluster solution. We arbitrarily started by examining the 28-cluster solution and then determined whether reducing the cluster analysis derived solution into fewer clusters made conceptual sense based on the Vaughn's content map. We proceeded until merging could no longer be supported by the framework. Again, after we identified a statistical solution (potential number of clusters to retain), we subjectively reviewed the solution to determine an optimal solution that integrated the MDS results and the subjective evaluation of the two-dimensional point map using the same procedure described above. Three members of the research team (TO, LCM & AT) independently conducted these subjective analyses and their consensus solution was presented to the primary team of investigators (SH, MB, and TB) who provided initial feedback for modification and agreed on a solution to be presented to the Expert Group. We presented the two solutions to the Expert Group who were asked to select their preferred solution and provide feedback and suggestions on that solution. One last round of modifications to both solutions was conducted based on the expert's feedback until consensus was reached among the authors. Expert sorting The 28 participating Experts sorted the food parenting practice concepts into 3-28 categories, with a mean (standard deviation) of 18.1 (6.5) and mode of 19 food parenting practice categories. Six Experts sorted the food parenting practices concepts into 19 categories, the same number as presented by the Vaughn et al. 2016 content map [11]. 
Of those, there was overlap in the names of 5-19 constructs (mean 14.7, standard deviation 5.8) with the content map, with only two having exactly the same structure (19/19 constructs) as the proposed content map [11]. It is not known how many elected to use the published content guide to inform their sorting. However, in reviewing the names of categories proposed by the Experts, many used at least some of the same construct names while adding and/or deleting food constructs for their final solution. Expert preference for proposed solutions Both the parsimonious and the comprehensive concept map solutions were presented to the original experts who participated in the sorting task. Of the 27 eligible expert respondents (TMO was excluded because she managed the responses), 23 responded (85.2%). The comprehensive concept map informed by the published content map [11] (Fig. 2) was preferred by 52% of experts, and another 35% preferred to present both solutions. Based on these preferences, we include the comprehensive concept map informed by the published content map within this article (Fig. 2: comprehensive solution for food parenting statements subjectively grouped into clusters, informed by the hierarchical cluster analysis and a published framework, Vaughn et al. [11]), but have made the parsimonious solution available online in Additional file 1. Experts reported they preferred the comprehensive solution because it was more theoretically based and the specific differentiation of food parenting practices had promise for better informing which food parenting practices were most important in influencing child eating behaviors. The most common reason for preferring to present both solutions was that the two frameworks had the potential to serve different purposes, with the comprehensive solution being more applicable to researchers in this area and the parsimonious solution being useful for those who try to operationalize the promotion of these practices in obesity prevention programs or policy statements. A few experts suggested that future work may be able to integrate the two models into one, with a more parsimonious global solution and detailed "sub-factors" embedded within the parsimonious constructs. The Comprehensive conceptual framework of food parenting practices The comprehensive food parenting practices concept map based on the published content map [11] resulted in a 17-cluster solution derived, with subjective modifications, from a statistically derived 16-cluster solution (see Fig. 2, with concepts, construct names, and definitions listed in Table 1). Vaughn et al. proposed grouping food parenting practices into three larger overarching domains: Control, Structure, and Autonomy Promotion [11]. Figure 2 illustrates how the comprehensive concept map potentially supports these same three overarching dimensions. Most of the food parenting practice constructs under each dimension defined by Vaughn and colleagues' content map [11] appear to also cluster on the comprehensive concept map (Fig. 2). All four Coercive Control constructs identified on the content map were spatially close and were therefore labeled as belonging to Control on the comprehensive concept map: Restriction (A), Using Food to Control Negative Emotions (B), Threats & Bribes (C), and Pressure to Eat (D). One notable difference in our solution was that the construct of Restriction (A) was specific to controlling weight, whereas in the content map Restriction was a more general concept.
Another difference was the addition of a new construct under Control, termed Intrusive Control (E). This construct included demanding and directive concepts where the parent dictated what and how much the child should eat. These demanding and directive concepts were distinct from pressuring the child to eat more, as seen in Pressure to Eat (D), and from the guidelines and boundaries that parents set, found in the Rules and Limits construct (G) under the Structure dimension. Intrusive Control was therefore made into a new construct. It was included in the Control domain because the focus was on parents dictating to the child without child input. The proposed content map [11] identified nine constructs under Structure, of which six were identified in the comprehensive concept map solution: Rules and Limits (G), Food Availability and Accessibility (I), Food Preparation (J), Modeling (K), Meal Routines (M), and Permissive (H) (or "unstructured practices" as termed by Vaughn et al. [11].) The Availability and Accessibility construct was separated into two constructs by Vaughn et al. [11], however, the comprehensive concept map solution collapsed it into one construct. In the comprehensive concept map, there was a lack of a distinct Monitoring category in the Structure dimension as defined by the published content map, which may be due to the multiple published items on monitoring being condensed down into one monitoring concept (# 38) for this sorting task. In the solution presented here it falls into the Rules and Limit construct, but future studies will need to assess whether it should be a separate construct in the Structure dimension. Different from Vaughn's et al. content map, three new categories were identified in the comprehensive concept map under the Structure dimension: Prompt to Eat (F), Exposure to a Variety/Selection (L) and Redirection & Negotiation (N). Upon review of the solution, some experts identified similarities of the concepts clustered in Prompt to Eat to concepts clustered under Pressure to Eat. However, the two clusters were spatially separate from each other on the map. Therefore, Prompt to Eat was identified as a distinct construct from Pressure to Eat and reflected more gentle reminders for a child to eat, as opposed to pushing the child to eat beyond satiety as seen in Pressure to eat. This difference suggests that the Experts may distinguish varying degrees of how parents remind or push their child to eat and some Experts felt that Prompt to Eat was a form of Structure instead of Control. Exposure to a Variety/Selection was not identified by the published content map but concepts that clustered into this construct spatially fell into the Structure dimension on the comprehensive concept map. On face validity, these concepts may be an extension of availability, but the concepts were spatially separate from the Availability and Accessibility construct on the map and therefore made into a new construct. Redirection & Negotiation was also not a construct in the content map proposed by Vaughn et al., but does have some overlap with the content map's Limited/Guided Choices (which was not present in the solution presented in Fig. 2). Future work will need to explore the overlap and differences between these two constructs. The Autonomy Promotion dimension had the most differences between the comprehensive concept map and the previously proposed content map [11]. 
Similar to the proposed framework, Child Involvement (O) was a distinct construct under Autonomy Promotion. However, the two proposed constructs of Praise and Encouragement were combined into a single Encourage Healthy Eating (P) construct (defined in Table 1 as non-directive methods to suggest that the child try or eat a healthy food, without being forceful and without consequences associated with the child not following through; these non-directive methods include gentle verbal cues or reminders and non-verbal methods such as making food more appealing or interesting for the child, and also include promoting children's self-regulation of intake so that they do not eat beyond satiety), while the two proposed Nutrition Education and Reasoning constructs were combined into a single Education/Reasoning (Q) construct. Lastly, the proposed construct Negotiation, which Vaughn et al. suggested belonged in Autonomy Promotion [11], was instead collapsed with Redirection in the Structure dimension (Fig. 2). Five concepts (13, 26, 27, 83, and 87; Fig. 2) were grouped with clusters spatially removed from their closest cluster on the point map, because the research team deemed they fit better conceptually. Food parenting practice concepts 13 (I let my child season the vegetables, such as adding ketchup or cheese sauce, to make them taste better) and 83 (If my child does not want to taste a food, I do not try to make him/her eat it) were spatially closest to Rules and Limits and Permissive Feeding, respectively. However, the team proposed that both concepts fit better into Child Involvement, which includes concepts that allow the parent to consider their child as an individual when motivating them to eat more nutritious foods. Concept 26 (I show enthusiasm about eating healthy foods.) was spatially within Child Involvement (P), but was moved into Modeling (M) to capture the concept of enthusiastic modeling [17], as per the recommendation of the Experts. Concept 27 (I hide or intentionally keep less [healthful food/drinks] out of my child's reach) has sometimes been classified as a form of covert control, but several experts felt it better fit into J: Availability/Accessibility. Concept 87 (I criticize my child about the food s/he eats.) was initially grouped with Pressure to Eat concepts, but it did not promote eating more food like the other concepts in Pressure to Eat. It was therefore moved to the adjacent new construct Intrusive Control, which focused on directive and intrusive parental control of the child. (Note: the original cluster refers to the cluster to which each item was assigned in the statistically derived 16-cluster solution; this is visually depicted in Fig. 2 as the gray shadow clusters and illustrates how many subjective changes were made to generate the proposed 17-cluster solution.) The parsimonious conceptual framework of food parenting practices The parsimonious solution was derived from the 4-cluster statistical solution, which expanded to a 7-cluster solution after it was subjectively reviewed and endorsed as part of the consensus process. The final model can be found in Additional file 1: Figure A (online), with construct names, definitions, and corresponding concepts found in Additional file 1: Table A (online). The subjective separation of clusters was performed because increasing the number to 5, 6, or 7 clusters in the hierarchical cluster solution did not fit based on face validity.
Instead, subjective modifications to the 4-cluster solution were based on the research team's current understanding of the published literature. The first modification was due to one of the 4 clusters containing unique concepts for different forms of coercive control (e.g. punitive restriction and pressure to eat). Prior research suggests that parents use pressure to eat more with picky eaters or underweight children, and it has been associated with lower weight status among children in several cross-sectional and longitudinal studies [18][19][20][21]. On the other hand, restriction has been more commonly associated with higher child weight status in crosssectional and longitudinal studies and may be a response to children who are heavier or are more food responsive [18][19][20][21]. These divergent outcomes associated with pressure to eat and restriction suggested these two constructs may be conceptually different and should be measured independently of each other. Therefore, all the concepts in this group that reflected pushing children to eat more (whether coercive or not) were moved to the Pressure to Eat construct (Cluster 1). All the concepts that reflected the use of punishment or coercion to restrict the amount that children could eat remained in the Restriction construct (Cluster 2). The next modification involved a large cluster that emerged from the 4 cluster solution which contained concepts related to parental Rules and Expectations (Cluster 4), along with two concepts (concepts 22 and 23) that theoretically did not belong with the others. These two concepts on the border of the cluster were more consistent with the idea of Emotional Feeding, first identified by Wardle et al. [22], and were therefore separated into a different construct named Emotional Feeding (Cluster 3). The statistically derived 4-cluster solution included one large cluster that combined concepts for creating structure for a child with indulgent food parenting practices. Indulgent feeding style has consistently been associated with higher child weight status in cross-sectional [23] and recently in a longitudinal study [24]. However, structure is believed to be protective from excessive weight gain among children and for ensuring adequate consumption and growth for children with low weight status. It was considered whether these constructs are at the opposite ends of one spectrum, but we believed it is possible that parents can be indulgent with or without structure. The last modification therefore involved separating these two constructs into Indulgence (Cluster 5) and Structure (Cluster 6). The final cluster identified in the statistically derived 4-cluster solution contained strategies that involved parental Active Encouragement for Nutritious Eating by their child, and remained intact (Cluster 7). Based on expert input on the solution, items that may require further evaluation for fitting within each construct are identified for future studies in Additional file 1. A comparison of the two solutions can be found in Additional file 1: Figure B (online). Discussion An international expert panel of researchers involved in food parenting practices research helped guide the development of a new Concept Map for Food Parenting Practices. Both of the final two Food Parenting Practices Concept Maps presented here were derived from a MDS point map of the spatial relationships of 110 food parenting concepts based on the sorting task of 28 food parenting experts from around the world. One of the concept maps (Fig. 
2, Table 1) is a subjective clustering of the food parenting practice concepts point map that retains a more comprehensive structure and was informed by the developmental psychology literature cited by Vaughn et al. [11] This comprehensive concept map should allow researchers to evaluate the impact of each of the proposed food parenting practice constructs on child outcomes, how parents use these practices in combination [25,26], and whether child characteristics moderate the impact of each construct. This detailed solution will also allow researchers to select specific constructs when testing hypothesis or developing or evaluating interventions. The other concept map (Additional file 1: Figure A and Table A) was informed by the hierarchical cluster analysis solution and took a parsimonious approach as we aimed at identifying fewer clusters or constructs. This latter approach may help scientists enhance their measurement of food parenting practice by reducing the burden of measurement, while assessing more global and potentially predictive constructs. After they were given the opportunity to review both solutions, over half of the experts preferred the comprehensive solution. They felt this model allows for a better distinction of which constructs are most predictive of child behavior and health outcomes and have a greater impact to move this area of research forward. Both the comprehensive and parsimonious solutions required subjective modifications to the statistically derived cluster solutions from the MDS point map. The difficulty in interpreting any of the hierarchical cluster analysis solutions without modifications, suggest that there was not great consensus among these experts for how to conceptualize a framework for food parenting practice, despite being provided with a published content map that several of the investigators and experts in this study helped develop. Of note, the cluster analysis of the MDS solution of only those experts that participated in the development of the published content map [11] and this sorting task was also explored, with no clearer solution apparent. One expert suggested that the comprehensive solution may be sub-factors within the more global parsimonious solution. Unfortunately, the current solutions do not fully support this as illustrated in Additional file 1, where there is not always clear overlap between the constructs defined in the two solutions. It is possible that future studies can help further refine both solutions such that the relationships between the two can be better delineated. In this study, an international group of experts helped develop a concept map for food parenting practices using a systematic approach to identify the food parenting practice concepts, by allowing them to sort the concepts into categories and interpret their sorting using statistical analysis. This is distinct from the approach taken to develop Vaughn et al.'s content map [11], for which an overlapping group of experts were asked to collaboratively propose a framework for food parenting practices based on their own research and review of the literature. This published content map currently lacks validation. While the intent of this study was not to validate the published content map, the team felt it was important to allow the experts access to it. Since the food parenting content map was not published at the time the experts were asked to complete the sorting task, it was provided to allow them to use all possible resources. 
They were instructed to use the framework only if it worked with their own conceptual approach to the sorting task. It is not known how many elected to do so. The work presented here was based on an item bank developed from published instruments of food parenting practice instruments systematically identified in 2013 [4]. Since that time, additional studies have been published that adapted or tested the psychometrics of food parenting practice scales already included in the item bank, to new populations [27][28][29][30][31]. In addition, several important new instruments of food parenting practices have been published that could not be included in the item bank to inform the Experts' tasks. These include the Parental Feeding Practices (PFQ) scale for Mexican American families [32], the Vegetable Parenting Practice scale [33], Feeding Practices and Structure Questionnaire (FPSQ-28) [34], and the Structure and Control in Parent Feeding (SCPF) [35]. However, several of these instruments were developed based on previously published scales and had much overlap with items already included in the item bank and would likely integrate with the concepts we identified. The extent to which these newer items fit within our existing concepts will be tested empirically in the future. The goal of this study is ultimately to improve the measurement of food parenting practices to allow for a more standardized assessment of food parenting practice constructs and better comparisons of results across studies. The team is currently iteratively developing items to cover the constructs presented in the comprehensive food parenting practice solution using existing or modified items from published scales. Future work will include testing the resulting questionnaire in English with parents via cognitive interviews, and then assessing parents' use of the food parenting practices in a large cross-sectional study for classical test theory and advanced psychometric analysis (e.g. item response modeling). This will allow psychometric testing of the proposed comprehensive model. If the comprehensive model is a poor fit which cannot be improved with minor alterations, the parsimonious model will be tested instead. Ultimately, the goal is to create a calibrated item bank that can be used for computer assisted testing in observation and interventions studies. The psychometric analysis will help achieve this by assessing whether items are stable across participant characteristics (e.g. income and education) via differential item functioning within a Canadian sample. Future work will need to assess the stability of items across different cultural racial and ethnic groups and in different languages. The improvements in measuring food parenting practices will hopefully result in more consistent use of instruments across studies and better understanding of the impacts of food parenting practices on child outcomes, and whether child characteristics or behaviors moderate these findings. Strengths and limitations This study has several strengths including a systematic approach to identifying food parenting practice concepts, engaging an international sample of experts in food parenting and child feeding in a sorting task, and using both quantitative and subjective approaches to interpret the resulting point map solution of food parenting practices. Limitations of this methodology should also be acknowledged. 
The item bank is based on instruments published before March 2013, and therefore does not include instruments developed after this time. Experts varied in how much they relied on the previously published content map [11] to inform their own sorting, which may have influenced the final analysis. The cluster-analysis-derived solution of food parenting practice clusters also suggested there was variability in how the experts operationalized each construct. Another limitation in measuring food parenting practice concepts is that they are typically operationalized as unidirectional behaviors of parents aimed at their child. This may imply the assumption that we can fully understand the impact of food parenting practices on child dietary behavior by applying the concept map in observational and intervention studies. It should be acknowledged that food parenting practice is embedded in a more complex family context, including the back-and-forth interactions between a parent and a child, as well as sibling and marital relationships, with reciprocal influences between all family members. Thus, it is likely that food parenting practices can best be understood in the context of interactions within the family as a whole. Conclusion In summary, the comprehensive food parenting practice concept map derived from the experts' sorting of food parenting practice concepts provides both a conceptual map and a roadmap for selecting and developing items for each construct. These will in turn be tested to eventually develop a calibrated item bank of food parenting practices, which will help standardize the measurement of food parenting practices in future observational and intervention studies.
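As a concrete illustration of the concept-mapping pipeline described in the Discussion above (expert card sorts turned into a co-occurrence matrix, an MDS point map, and a hierarchical cluster solution), here is a minimal sketch. The concept names, sort data, and cluster count are invented for the example and are not the study's actual data or analysis.

```python
# Illustrative concept-mapping pipeline: expert card sorts -> co-occurrence ->
# dissimilarity -> MDS point map -> hierarchical clustering (synthetic data).
import numpy as np
from scipy.spatial.distance import squareform
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.manifold import MDS

concepts = ["pressure to eat", "restriction", "modeling", "availability", "rules"]
# Each expert's sort assigns every concept to a group label (hypothetical sorts).
sorts = [
    {"pressure to eat": 0, "restriction": 0, "modeling": 1, "availability": 1, "rules": 0},
    {"pressure to eat": 0, "restriction": 0, "modeling": 1, "availability": 2, "rules": 2},
    {"pressure to eat": 0, "restriction": 1, "modeling": 2, "availability": 2, "rules": 1},
]

n = len(concepts)
cooc = np.zeros((n, n))
for sort in sorts:
    for i in range(n):
        for j in range(n):
            if sort[concepts[i]] == sort[concepts[j]]:
                cooc[i, j] += 1
dissim = 1.0 - cooc / len(sorts)        # proportion of experts who split each pair

# Two-dimensional MDS point map from the precomputed dissimilarities.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
points = mds.fit_transform(dissim)

# Hierarchical clustering of the same dissimilarities; cut into 2 clusters.
Z = linkage(squareform(dissim, checks=False), method="average")
labels = fcluster(Z, t=2, criterion="maxclust")
for concept, (x, y), lab in zip(concepts, points, labels):
    print(f"{concept:15s}  ({x:+.2f}, {y:+.2f})  cluster {lab}")
```

In a real analysis the statistically derived clusters would then be reviewed and, as described above, subjectively adjusted by the experts.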
v3-fos-license
2018-12-17T04:39:41.950Z
2016-07-31T00:00:00.000
56405669
{ "extfieldsofstudy": [ "Mathematics" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "http://downloads.hindawi.com/journals/amp/2016/1810795.pdf", "pdf_hash": "e083b63df3a4c444f32a44fcdfb9bfdbccfe4db2", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41755", "s2fieldsofstudy": [ "Engineering", "Mathematics" ], "sha1": "e083b63df3a4c444f32a44fcdfb9bfdbccfe4db2", "year": 2016 }
pes2o/s2orc
Explicit Solutions of the Boundary Value Problems for an Ellipse with Double Porosity The basic two-dimensional boundary value problems of the fully coupled linear equilibrium theory of elasticity for solids with double porosity structure are reduced to the solvability of two types of problems. The first one is the BVPs for the equations of classical elasticity of isotropic bodies, and the other is the BVPs for the equations of pore and fissure fluid pressures. The solutions of these equations are presented by means of elementary (harmonic, metaharmonic, and biharmonic) functions. On the basis of the gained results, we constructed an explicit solution of some basic BVPs for an ellipse in the form of absolutely uniformly convergent series. Introduction In a material with two degrees of porosity, there are two pore systems, the primary and the secondary. For example, in a fissured rock (i.e., a mass of porous blocks separated from one another by an interconnected and continuously distributed system of fissures), most of the porosity is provided by the pores of the blocks, or primary porosity, while most of the permeability is provided by the fissures, or secondary porosity. When fluid flow and deformation processes occur simultaneously, three coupled partial differential equations can be derived [1,2] to describe the relationships governing pressure in the primary and secondary pores (and therefore the mass exchange between them) and the displacement of the solid. A theory of consolidation with double porosity structure was proposed by Wilson and Aifantis [1]. The physical and mathematical foundations of the theory of double porosity were considered in the papers [1][2][3], where analytical solutions of the relevant equations are also given. This theory unifies a model proposed by Biot for the consolidation of deformable single porosity media with a model proposed by Barenblatt for seepage in undeformable media with two degrees of porosity. The basic results and the historical information on the theory of porous media were summarized by De Boer [4]. However, Aifantis' quasi-static theory ignored the cross-coupling effect between the volume change of the pores and fissures in the system. The cross-coupled terms were included in the equations of conservation of mass for the pore and fissure fluid and in Darcy's law for solids with double porosity structure by several authors [5][6][7][8]. Porous media theories play an important role in many branches of engineering, including material science, the petroleum industry, chemical engineering, and soil mechanics, as well as biomechanics. In recent years, many authors have investigated the BVPs of the 2D and 3D theories of elasticity for materials with double porosity structure, publishing a large number of papers (some of these results can be seen in [9][10][11][12][13][14][15][16][17] and references therein). In those works, the explicit solutions of some BVPs in the form of series are given in a form useful for engineering practice. 
In the present paper, the basic two-dimensional boundary value problems of the fully coupled linear equilibrium theory of elasticity for solids with double porosity structure are reduced to the solvability of two types of problems. The first one is similar to the BVPs for the equations of classical elasticity of isotropic bodies, while the second one is the BVPs for the equations of the pore and fissure fluid pressures. The solutions of these equations are presented by means of elementary (harmonic, metaharmonic, and biharmonic) functions. Basic Equations Let x = (x1, x2) be a point of the Euclidean 2D space R2. In what follows, we consider an ellipse with a double porosity structure that occupies the region Ω of R2. The system of homogeneous equations of the linear equilibrium theory of elasticity for solids with double porosity structure can be written as follows [9]: where u = (u1(x), u2(x)) is the displacement vector in a solid, p1(x) and p2(x) are the pore and fissure fluid pressures, respectively, β1 and β2 are the effective stress parameters, γ > 0 is the internal transport coefficient and corresponds to the fluid transfer rate with respect to the intensity of flow between the pores and fissures, λ, μ, β1, β2 are all constitutive coefficients, mj = kj/μ′ (j = 1, 2), m12 = k12/μ′, m21 = k21/μ′, μ′ is the fluid viscosity, k1 and k2 are the macroscopic intrinsic permeabilities associated with matrix and fissure porosity, respectively, k12 and k21 are the cross-coupling permeabilities for fluid flow at the interface between the matrix and fissure phases, and Δ is the two-dimensional Laplace operator. We consider the vectors as column matrices, if necessary. Throughout this paper, it is assumed that β1² + β2² > 0. The superscript ⊤ denotes transposition. We will suppose that (3) holds. Note that the BVPs for system (2), which contains p1(x) and p2(x), can be investigated separately. Then, supposing p1(x) and p2(x) to be known, we can study the BVPs for system (1) with respect to u(x). By combining the obtained results, we arrive at explicit solutions of the BVPs for systems (1)-(2). Obviously, from system (2) we obtain separate equations for p1(x) and p2(x), and it is easy to see that the solutions of system (2) can be written in an explicit form. First, we assume p1 and p2 to be known. Then, for u(x), we get the nonhomogeneous equation (8). It is well known that the general solution of (8) has the form u = k + k0, where k is a general solution of the homogeneous equation μΔk + (λ + μ) grad div k = 0 (10) and k0 is a particular solution of the nonhomogeneous equation. Boundary Value Problems For systems (1) and (2), we pose the following BVPs. Find, in the domain Ω = {0 ≤ ξ < ξ1, 0 ≤ η < 2π}, a regular solution U(u, p1, p2) ∈ C²(Ω) of systems (1) and (2) satisfying one of the following boundary conditions (see Figure 1), in which the boundary data are given functions. Note that when the stress components are given on the boundary ξ = ξ1, the BVP is solvable if the principal vector and the principal moment of the external stresses are equal to zero. Clearly, the BVPs for (2) can be investigated separately (Problem B). Then, by admitting p1(x) and p2(x) as known, we can study the BVPs for system (1) with respect to u(x) (Problem A). By combining the obtained results, we arrive at the explicit solutions of the BVPs for systems (1)-(2) and (21). 
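The displayed equations of this section did not survive text extraction. As a hedged reconstruction, the block below writes the displacement problem and the decomposition u = k + k0 described in the text, using notation that is standard in the double-porosity elasticity literature (λ, μ Lamé constants; β1, β2 effective stress parameters; p1, p2 pore and fissure pressures); the exact coefficients and signs used in the original paper may differ.

```latex
% Hedged sketch of the displacement equation and the decomposition u = k + k_0
% (standard double-porosity notation; not necessarily the paper's exact form).
\begin{gather*}
\mu\,\Delta \mathbf{u} + (\lambda+\mu)\,\operatorname{grad}\operatorname{div}\mathbf{u}
   = \operatorname{grad}\!\left(\beta_{1} p_{1} + \beta_{2} p_{2}\right)
   \quad \text{in } \Omega,\\[4pt]
\mathbf{u} = \mathbf{k} + \mathbf{k}_{0},\qquad
\mu\,\Delta \mathbf{k} + (\lambda+\mu)\,\operatorname{grad}\operatorname{div}\mathbf{k} = \mathbf{0},\\[4pt]
\mu\,\Delta \mathbf{k}_{0} + (\lambda+\mu)\,\operatorname{grad}\operatorname{div}\mathbf{k}_{0}
   = \operatorname{grad}\!\left(\beta_{1} p_{1} + \beta_{2} p_{2}\right).
\end{gather*}
```

The decomposition itself is generic: the general solution of the nonhomogeneous Lamé-type system is the sum of the general solution of the homogeneous system and any particular solution driven by the pressure gradients.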
The components of the stress vector (17) can be written in an explicit form. Let us rewrite conditions (25), for ξ = ξ1, in the equivalent form (28). In (28), the two unknown functions are harmonic functions which, by using the method of separation of variables [23,24], can be presented in the form (29). By substituting (22)-(24) and (29) into (28), for determining the unknown coefficients we get an infinite system of algebraic equations whose main matrix has a block-diagonal form (see Figure 3). Let us assume that the harmonic function from (18) is represented in the form of a series. Taking into account the boundary conditions (31), to determine the unknown coefficients we obtain an infinite system of algebraic equations whose main matrix has a block-diagonal form (Figure 3). By solving this system, we find the harmonic function of (ξ, η). For determining the particular solution in (17), let us consider the particular cases: keeping in mind the homogeneous boundary conditions, one of the two families of coefficients vanishes. Let us assume the first family is zero. Substituting (32) into (17), we obtain equation (33). The solution of (33) is sought in the form (34), where the first term is a harmonic function and can be obtained in the same way as above; the remaining term is the particular solution of equation (36). The solution of (36) is sought in the form (37). Substituting (37) into (36), we obtain the relations between the coefficients, and the solution follows. Quite similarly, we obtain the solution when the other family of coefficients is zero. By combining the obtained results, we obtain an explicit solution of (17). Remark 1. When we have nonhomogeneous boundary conditions, the coefficients in (32) are different from zero, and the particular solution is obtained by the above-mentioned method for each series separately. Conclusions The main results of this work can be formulated as follows: (1) The system of equations of the linear equilibrium theory of elasticity for solids with double porosity structure is written in terms of elliptic coordinates. (2) The problems are reduced to the solvability of two types of problems. The first one is similar to the BVPs for the equations of classical elasticity of isotropic bodies, while the second one is the BVPs for the equations of the porous and fissure fluid pressures. (3) Analytical (exact) solutions are obtained for 2D BVPs for the ellipse with double porosity structure. By using the above-mentioned method, the following is possible: (4) It is possible to construct explicitly the solutions of basic BVPs for systems (1) and (2) for simple cases of 2D domains (circle, plane with a circular hole, etc.) in the form of absolutely and uniformly convergent series that are useful in engineering practice. (5) It is possible to obtain numerical solutions of the boundary value problems. (6) It is possible to construct explicitly the solutions of basic BVPs of the systems of equations in the modern linear theories of elasticity, thermoelasticity, and poroelasticity for materials with microstructures and for elastic materials with double porosity for a circle, and so forth. (7) In practice, such BVPs are quite common in many areas of science. The potential users of the obtained results will be the scientists and engineers working on the problems of solid mechanics, micromechanics and nanomechanics, mechanics of materials, engineering mechanics, engineering medicine, biomechanics, engineering geology, geomechanics, hydroengineering, applied and computing mechanics, and applied mathematics. 
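The series representations referred to in this section (for example around (29) and (32)) were lost in extraction. The block below shows the standard form of a harmonic function regular inside an ellipse, written in elliptic coordinates (ξ, η); this is the type of expansion that separation of variables yields here, and the coefficient names are illustrative rather than the paper's own.

```latex
% Standard interior harmonic expansion in elliptic coordinates (illustrative form).
w(\xi,\eta) \;=\; \alpha_0
  \;+\; \sum_{m=1}^{\infty}\Big(
        \alpha_m \cosh(m\xi)\cos(m\eta)
      + \beta_m  \sinh(m\xi)\sin(m\eta)\Big),
\qquad 0 \le \xi < \xi_1,\; 0 \le \eta < 2\pi .
```

Substituting such expansions into the boundary conditions couples each harmonic order m only with itself, which is why the resulting infinite algebraic system has the block-diagonal main matrix sketched in Figure 3.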
The coordinate curves ξ = const and η = const are ellipses and hyperbolas (A.1), and hξ = hη = c√(sinh²ξ + sin²η) are the metric coefficients (Lamé coefficients). The Laplacian operator and the operator grad take their corresponding forms in the elliptic coordinates, where u = u(ξ, η). The Helmholtz equation can be written in these coordinates, where u = u(ξ, η), and the biharmonic equation has an analogous form. B. Solution of the Helmholtz Equation in the Elliptic Coordinates The Helmholtz equation in the elliptic coordinates has the corresponding form. Let us solve this equation by using the method of separation of variables. Let the function be sought as a product of single-variable factors; then the Helmholtz equation takes a separated form. From here, rewriting the last equation and taking into account the standard identities for sinh² and sin², we obtain a pair of ordinary differential equations. C. The Solutions of the Differential Equation of Mathieu The differential equation of Mathieu takes its standard form, and the solutions of this equation have the forms given in [25]. D. The Solutions of the Modified Differential Equation of Mathieu The modified differential equation of Mathieu has the corresponding standard form. E. Physical Motivation of the Double Porosity Model The double porosity model has received a lot of attention from mathematicians and from engineers. In such a model, there are two pore systems with different permeability. There is first a set of isolated porous blocks of low permeability (sometimes called the matrix), surrounded by a network of high-permeability connected porous medium (usually called the fracture network). Pores are pervasive in most of the igneous, metamorphic, and sedimentary rocks in the earth's crust. In fact, porosity found in the earth may have many shapes and sizes, but two types of porosity are more important. One is the matrix porosity, and the other is fracture or crack porosity, which may occupy very little volume, but fluid flow occurs primarily through the fracture network. In physical terms, the theory of poroelasticity postulates that when a porous material is subjected to stress, the resulting matrix deformation leads to volumetric changes in the pores. The pores are filled with fluid. The presence of the fluid results in the flow of the pore fluid between regions of higher and lower pore pressure. The physical process of coupled deformation is governed by the following equations: (1) The equations of motion [27,28]: t_ij,j = ρ(ü_i − F_i), i, j = 1, 2, 3 (E.1), where t_ij are the components of the total stress tensor, ρ > 0 is the reference mass density, and F_i is the body force per unit mass. We assume that subscripts preceded by a comma denote partial differentiation with respect to the corresponding Cartesian coordinate, repeated indices are summed over the range (1, 2, 3), and the dot denotes differentiation with respect to t (here t denotes the time variable; t ≥ 0). Figure 3: Diagram of the main matrix of the algebraic equation system.
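Because the appendix formulas were lost in extraction, the block below records, for reference, the standard elliptic-coordinate relations and the pair of Mathieu equations obtained by separating the Helmholtz equation; c denotes half the focal distance, and the parameter names a and q follow common usage rather than the paper's notation.

```latex
% Standard elliptic coordinates, Laplacian, and the Mathieu pair obtained by
% separating the Helmholtz equation (illustrative notation).
\begin{gather*}
x_1 = c\,\cosh\xi\,\cos\eta,\qquad x_2 = c\,\sinh\xi\,\sin\eta,\qquad
h_\xi = h_\eta = c\sqrt{\sinh^2\xi + \sin^2\eta},\\[4pt]
\Delta w = \frac{1}{c^2\!\left(\sinh^2\xi + \sin^2\eta\right)}
  \left(\frac{\partial^2 w}{\partial \xi^2} + \frac{\partial^2 w}{\partial \eta^2}\right),
\qquad \Delta w + \kappa^2 w = 0,\\[4pt]
w(\xi,\eta)=F(\xi)\,G(\eta):\qquad
\frac{d^2 G}{d\eta^2} + \left(a - 2q\cos 2\eta\right)G = 0,\qquad
\frac{d^2 F}{d\xi^2} - \left(a - 2q\cosh 2\xi\right)F = 0,\qquad
q=\frac{c^2\kappa^2}{4}.
\end{gather*}
```

The first of the two separated equations is the ordinary Mathieu equation (Section C) and the second is the modified Mathieu equation (Section D), with a the separation constant.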
v3-fos-license
2021-10-15T15:27:01.334Z
2021-01-01T00:00:00.000
238998424
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://ccsenet.org/journal/index.php/jas/article/download/0/0/46082/49086", "pdf_hash": "79d7d9725e65a16dd928e383e487474a40ce4178", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41756", "s2fieldsofstudy": [ "Environmental Science", "Agricultural and Food Sciences", "Biology" ], "sha1": "4936488d56c02ffbc607ea1eee4afc3374804d40", "year": 2021 }
pes2o/s2orc
Fungi Resistance to Multisite Fungicides Multisite fungicides have been used for many years in fruit and vegetable crops worldwide. Cases of the development of fungal resistance to these fungicides have been rare. From the 2002 season onwards, with the outbreak of Asian soybean rust in Brazil, caused by Phakopsora pachyrhizi, site-specific fungicides became the main weapon for its control. From 2002 to 2011, penetrant mobile site-specific fungicides were used, and they are used until today in double (DMI + QoI) or triple (DMI + QoI + SDHI) co-formulations, in an area of more than 30 million hectares and with three sprays per area. This resulted, as expected, in a reduction of the fungus' sensitivity, today with cross and multiple resistance to those site-specific fungicides. From the 2011 season, in an attempt to recover control that for some chemicals and mixtures reached < 30%, research was started with site-specific + multisite mixtures, taking as an example the development of Phytophthora infestans resistance to metalaxyl in Europe, which showed the long-lasting solution found by the addition of the multisite mancozeb. It is expected that the effective life of site-specific + multisite mixtures may be as long in controlling soybean rust as it has been for potato, tomato and grape downy mildews. This review presents the concepts involved in the reduction of sensitivity to fungicides. Some fungal species and fungicides involved are listed. Considering the P. pachyrhizi sporulation potential, the large soybean area sprayed, the number of sprays per area mainly with site-specific co-formulations, and the reduced area sprayed with multisites, we discuss the need for annual monitoring of P. pachyrhizi sensitivity to these chemicals. From the 1970s, the resistance of phytopathogenic fungi became a problem with the predominant use of mobile-penetrating fungicides that were site-specific (Klittich, 2008). On the other hand, the resistance of fungi to multisite fungicides (arylaminopyridine, chloronitriles, dithiocarbamates, copper, tin and mercury derivatives, phthalimides, sulfur, etc.) is still a rare event. The difficulty is due to the low probability of the occurrence of the minimum necessary number of mutations at different loci in the same fungus. On the contrary, with the introduction and repeated use of site-specific fungicides, acquired resistance has become common, but at a rate incomparable to that of multisites (van den Bosch & Gilligan, 2008). Fungicides Fungicides are synthetic or natural chemical compounds, or biological organisms, capable of killing or inhibiting fungi, or the germination of fungal and oomycete spores (Mueller et al., 2013). Fungitoxicity Fungitoxicity is the property that a chemical substance has of being toxic to fungi and stramenopiles (pseudofungi or chromists) at low concentration. This property is an attribute of the molecule. Mode of Action, Mechanism of Action or Biochemical Mechanism of Action The chemical structure of the fungicide active ingredient (a.i.) defines its mode of action by determining its uptake, its movement in the plant, and its ability to reach and bind to the site of action, the physical location where the fungicide acts (Delp & Dekker, 1985). Mode of action is the process by which a chemically active substance produces an effect on a living organism or on a biochemical system. Alternatively, the mechanism refers to the biochemical interaction through which the substance produces its toxic effect (Hewitt, 1988; Latin, 2017; Mueller et al., 2013). 
Site of Action Site of action, or target site, refers to the specific enzyme in a cellular process to which the fungicide binds (Hewitt, 1988). Sensitivity Sensitivity (from 'sensitive', that which feels) is the property of the fungus to perceive changes from the environment and to react to them. Sensitivity is an attribute of the fungal species (Reis et al., 2019). Insensitivity Not all fungi are sensitive to all fungicides (spectrum of action); some are always insensitive to certain molecules. For example, fungi of the genera Alternaria, Bipolaris, Curvularia, Drechslera and Exserohilum are insensitive to benzimidazole fungicides; in other words, benzimidazoles are not fungitoxic to these genera. Another example is the insensitivity of oomycetes, which cause downy mildews, to triazoles and benzimidazoles (Reis et al., 2019). A fungus sensitive to a fungicidal molecule may have its sensitivity altered, in which case it is said to have developed resistance. However, an insensitive fungus will never become sensitive. Control Failure The resistance of plant pathogenic fungi to fungicides is observed as a control failure or as a reduction in the performance of the fungicide; in this situation, farmers often react by increasing the dose and/or by reducing the interval between sprayings. In the next step, field experiments confirm the control failure. It is the situation in which the farmer observes that, when compared to previous crops, the fungicide efficiency was reduced. The farmer says that there was a 'failure of control', starts to complain, and seeks explanations for the fact (Reis et al., 2019). Loss of Sensitivity The word loss implies total insensitivity, which is not always true. Nevertheless, the concept of loss can be delimited following the criterion of Edgington and Khew (Edgington & Khew, 1971). Thus, it can be considered a sensitivity loss, or the fungicide considered non-toxic, when the fungus presents an inhibitory concentration (IC50) > 50 mg/L for a fungicide, and a sensitivity reduction when the IC50 is lower than 50 mg/L. Sensitivity Reduction Reduction is a slow process, requiring the application of a site-specific fungicide for many seasons and over a large area, as with P. pachyrhizi and the DMI (FRAC group 3, demethylase inhibitor), QoI (group 11, quinone outside inhibitor) and SDHI (group 7, succinate dehydrogenase inhibitor) fungicides. The reduction is present when the inhibitory concentration (IC50) increases over time for mycelial growth, spore germination or disease control. Therefore, in most cases, what is happening is a slow reduction instead of a loss of sensitivity. Molecular techniques are useful in proving the presence of reduced sensitivity after resistance has been quantified in laboratory bioassays (Hollomon, 2015). Erosion of the Fungicide An expression taken as a synonym for the reduced sensitivity of a fungus to a given fungicide (Hahn, 2014). Resistance Fungicide resistance is the result of the adaptation of a fungus to a fungicide due to a stable, hereditary genetic alteration, leading to the emergence and spread of mutants with reduced sensitivity to the fungicide (Delp & Dekker, 1985). The term proposed by FRAC (2019) refers to a stable and hereditary adjustment of a fungus to a fungicide, resulting in a reduction in the pathogen's sensitivity. This adjustment results in a 'considerable' reduction in the sensitivity of the pathogen to the chemical compound, which can be partial or total, always with an increase in the IC50 [sensitivity reduction factor (SRF) > 1.0]. This ability is gained through evolutionary processes (Mueller et al., 2013). 
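To make the IC50 and sensitivity reduction factor (SRF) concepts concrete, here is a minimal sketch that fits a log-logistic (Hill-type) curve to in vitro dose-response data and computes the SRF as the ratio of IC50 values between a monitored isolate and a baseline isolate. The doses and inhibition values are invented for the example and do not come from any of the studies cited here.

```python
# Illustrative IC50 estimation from dose-response data and SRF computation.
import numpy as np
from scipy.optimize import curve_fit

def log_logistic(dose, ic50, slope):
    """Fraction of growth inhibition as a function of fungicide dose (Hill-type curve)."""
    return 1.0 / (1.0 + (ic50 / np.maximum(dose, 1e-12)) ** slope)

doses = np.array([0.01, 0.1, 0.5, 1.0, 5.0, 10.0, 50.0])                 # mg/L
baseline_inhib = np.array([0.02, 0.20, 0.55, 0.70, 0.93, 0.97, 1.00])    # wild-type isolate
monitored_inhib = np.array([0.00, 0.05, 0.15, 0.30, 0.62, 0.78, 0.96])   # isolate under selection

popt_base, _ = curve_fit(log_logistic, doses, baseline_inhib, p0=[0.5, 1.0])
popt_mon, _ = curve_fit(log_logistic, doses, monitored_inhib, p0=[2.0, 1.0])

srf = popt_mon[0] / popt_base[0]   # SRF > 1 indicates reduced sensitivity
print(f"baseline IC50  = {popt_base[0]:.2f} mg/L")
print(f"monitored IC50 = {popt_mon[0]:.2f} mg/L")
print(f"SRF = {srf:.1f}")
```

In monitoring programs, the same fit repeated season after season on field isolates is what reveals the slow upward drift of the IC50 described above.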
Cross-Resistance Fungicides of the same chemical group, for example tebuconazole and cyproconazole, have different chemical structures. However, both are toxic to fungi in the same way. Therefore, both are considered demethylation inhibitor (DMI) fungicides, a name that expresses their shared mode of action. This means that even if two fungicides within the same group are rotated, the fungus detects them as being the same fungicide. It also means that if resistance develops to one member of the group, it will be present for all other members of that family. The resistance is called cross-resistance, reaching all group members (EPPO, 1998). Multiple Drug Resistance (MDR) It is the reduction in sensitivity to various fungicides with different modes of action shown by a fungal species. MDR is defined as the acquired sensitivity reduction of at least one fungus to at least three fungicides with distinct mechanisms of action. The main resistance mechanism involved here is the overexpression of genes for efflux transporters present in the plasma membrane. It results in increased cellular expulsion of the fungicide, reducing the fungus' sensitivity to several unrelated fungicides (Alekshun & Levy, 2007; Chapman et al., 2011; Chen et al., 2017; Hahn & Leroch, 2015; Leroux et al., 2002). Acquired Resistance It refers to a fungus that in the wild state was sensitive to the fungicide and that developed resistance after exposure to the chemical (Hollomon, 2015). This is what happens with fungicides applied to control diseases in the field. Site-Specific, Monosite or Unisite Fungicide Of the millions of biochemical reactions that take place in the fungal cell, the site-specific fungicide (monogenic resistance) interferes with only one biochemical site (an enzyme). This is an enzyme vital for the fungus' physiology, so if it is blocked, the fungus will die. Fungicides with a site-specific mode of action are at high risk for the development of resistance compared to multisite fungicides (Mueller et al., 2013). Multisite Fungicide It refers to a fungicide that paralyzes at least five metabolic processes of the fungus (Mueller et al., 2013). For this reason, the development of resistance to multisites has not yet been frequently reported. Site-Specific and Penetrant Mobile Many use these two terms assuming that all site-specific fungicides are penetrant mobile; however, iprodione is a site-specific, signal transduction inhibitor that is non-penetrating, with protectant and some eradicant activity (PPDB, 2021). Fungicide Effective Life The effective life of a fungicide is the time from its introduction on the market for use in the control of a given fungus until the moment when efficient control is no longer obtained due to the development of resistance in the target fungus (Hobbelen et al., 2011). Mechanisms of Fungi Resistance to Fungicides How do fungi defend themselves against fungicides? There are four main mechanisms by which fungi become resistant to fungicides. For a better understanding of these mechanisms, the functions of the cell organelles involved in the defense mechanisms of fungi are briefly reviewed. Plasma Membrane The cell, or cytoplasmic, membrane is a biological membrane that has selective permeability to organic molecules and ions. It controls the movement of substances into and out of the cell. This membrane is made up of a phospholipid bilayer with proteins embedded in it. 
The membrane is said to be semi-permeable in that it can let a substance (molecule or ion) pass freely, pass in a limited way, or not pass at all. Membranes also contain receptor proteins that allow cells to detect external signaling molecules such as hormones (Ishii & Hollomon, 2015). The main mechanisms of fungal resistance to fungicides are: (a) Substance transport across the plasma membrane: To reach the intracellular organelles, the fungicide has to cross the plasma membrane, which has a complex constitution. There are three forms of transport across the cell membrane (Alekshun & Levy, 2007; Ishii & Hollomon, 2015; Ward et al., 2006). The movement of substances across the membrane can be passive, by simple diffusion or by diffusion facilitated by channel transport proteins following a favorable concentration gradient, or active, with energy consumption (ATP), against a concentration gradient. (b) Change in the target site reducing sensitivity to the fungicide: The most common mechanism of resistance is a change in the target site (enzyme) in the fungus and occurs only with site-specific fungicides, which dominated the market after 1970. Multisite fungicides, most of those developed since 1969, are not prone to the development of resistance at the target or action site. As the fungus grows, its DNA is replicated when new cells are created. This replication process is imperfect and errors can occur. Such errors are known as mutations. DNA is the code used to produce enzymes in the cell, and some mutations result in a change in the target site's amino acid sequence, which in turn alters the shape of the receptor site (the lock) for the fungicide. Thus, the fungicide (the key) may not fit into the site (the lock), resulting in a partial or total reduction of the fungus' sensitivity to the fungicide. Therefore, an alteration by mutation in the fungicide's target of action reduces the fungicide's affinity for this target site, resulting in reduced sensitivity (Ishii & Hollomon, 2015). (c) Gene overexpression: Gene overexpression is the abnormal production of large amounts of a substance which is encoded by one or more genes. In the case of overexpression, the target enzyme does not undergo any change (mutation). Instead, the pathogen produces it in large quantities (Cools et al., 2012). An example is the overexpression of the Cyp51 gene. Azole fungicides (DMIs) inhibit the demethylase enzyme, encoded by the Cyp51 gene, which is involved in the ergosterol biosynthesis process. To defend itself from the effect of the fungicide, the fungus increases the production of this enzyme (demethylase) so that ergosterol is still produced even in the presence of the fungicide. Due to the increased production of the enzyme, the amount of fungicide present in the cell is not enough to couple with all the available enzyme and completely block the production of ergosterol. This leaves an amount of free enzyme without fungicide coupled to it, producing enough ergosterol to keep the cell alive. In this case, the amount of fungicide is not enough to completely inhibit ergosterol production. Gene overexpression results in production of demethylase beyond the normal level. Therefore, even though the triazole inhibits part of the enzyme pool, there is still an amount of enzyme remaining, maintaining the cell's functional activity (Alekshun & Levy, 2007; Hahn & Leroch, 2015; Leroux et al., 2001; Price et al., 2015; Ward et al., 2006). 
(d) Exclusion of the fungicide from the cell: Efflux is the elimination of a certain substance from the interior of the cell to the outside. Active efflux is a condition where pathogen cells pump the fungicide out of the cell faster than it can accumulate to a toxic concentration. However, the target site remains unchanged. Active efflux prevents the accumulation of a concentration sufficient to stop cell function and fungal growth. Unlike influx, the entry of substances into the cell, efflux pumps occur naturally in cells and exclude or expel foreign substances, while other transporters import substances useful to metabolism. In fungi, the most common efflux pumps are protein transport pumps. Occasionally, these transporters succeed in expelling sufficient amounts of the fungicide from within the cell. Transport or carrier proteins in the plasma membrane are responsible for the active efflux of foreign material, including fungicides (Price et al., 2015). There are two types of efflux: (i) Passive efflux is the expulsion of a certain substance to the outside of a cell by passive movement, or passive diffusion. (ii) Active efflux involves the participation of an efflux pump, which consists of the active pumping of the fungicide from the intracellular to the extracellular environment. Efflux pumps are transmembrane proteins that can act to expel fungicides against a concentration gradient. There may also be an overexpression of efflux pumps, consisting of an increase in their number (Hahn & Leroch, 2015). Multiple drug resistance is related to the overexpression of transport proteins. (e) Detoxification or molecule inactivation by thiol overproduction: The overproduction of thiols, substances that inactivate fungicide molecules, has been suggested as the most likely mechanism that confers fungal resistance to mancozeb (Barak & Edgington, 1984; Gilpatrick, 1982; Yang et al., 2019). However, the mechanisms that confer resistance to this fungicide are still debated and very complex to clarify. The main genomic and molecular study was carried out with the yeast Saccharomyces cerevisiae, and it determined 286 genes that would be involved in resistance to a xenobiotic (Dias et al., 2009). Many papers have been published on fungal resistance to multisite fungicides; some were selected as examples (Table 1). Final Remarks Although fungal resistance to iprodione, a non-penetrant fungicide, has been reported, it was not included in this review due to its site-specific mode of action. The number of site-specific fungicide molecules marketed is considerably higher than that of multisites. From the 1970s onwards, site-specifics have dominated the world market, being used to control diseases in a greater number of plant species, over a larger area and with a greater number of sprayings per season, in addition to carrying a high risk of resistance development. This has resulted in the largest number of citations of site-specific resistance. It is likely that for all commercialized fungicides, both multisite and site-specific, regardless of their active ingredient and resistance mechanism, at least one resistant fungus has already been reported. However, as site-specifics dominate the market, the large volume published focuses on this group. Based on the consulted literature, even new site-specific mechanisms of action developed in the future will have the potential to select, in a few seasons, fungi resistant to them, shortening their effective life. 
Although Bordeaux mixture is considered the oldest foliage protectant fungicide, developed in 1885, no resistance of Phytophthora infestans (Mont.) de Bary to this fungicide was found in the consulted literature. Perhaps it is the fungicide with the longest effective life in the history of downy mildew chemical control on potatoes, tomatoes and grapes. Are the cuprics the hardest for fungi to defeat? In this sense, in the consulted literature only two reports of reduced sensitivity of fungi to cupric fungicides were found, in contrast to reports for a large number of species of phytopathogenic bacteria (Lamichhane et al., 2018). In the available literature, no reports were found on rust fungi resistant to multisites. It would also be important to determine the time required from a fungicide's first use to the emergence of resistance, that is, the duration of its effective life. According to FRAC (2019), fungal resistance to pencycuron (a phenylurea recommended for the control of Rhizoctonia solani in the treatment of potato tubers) and to tricyclazole (a penetrant-mobile triazolobenzothiazole for the control of Pyricularia oryzae Cav. in rice) has not yet been reported. Should the chemical industry continue to synthesize site-specific fungicides, as it has been doing intensively, even with a relatively short effective life, as is happening with the new carboxamides towards Phakopsora pachyrhizi Sydow & Sydow? In Brazil, the greatest use of multisite fungicides (chlorothalonil, mancozeb and copper oxychloride) has been in the soybean crop, to control P. pachyrhizi, the causal agent of Asian rust. Their use began in 2010/11 and has therefore continued over the last 10 seasons. To give an idea of the selection pressure to which multisites are subjected in the soybean crop: in the 2020/21 season, the area cultivated with soybeans was > 38 million hectares, with 2.6 sprayings/ha, but with multisites applied to only 12% of that area. What can happen with these multisites in the control of soybean rust under this situation? Should multisites be used alone for ASR control? At the moment, in Brazil, the use of multisites is the main weapon against the development of P. pachyrhizi resistance to mobile penetrant site-specific fungicides. Multiple resistance to DMIs, QoIs and SDHIs is present in P. pachyrhizi and, even so, these fungicides are applied in the largest area of soybean without the multisite mixture, and thus their efficacy has been reduced season after season. If the efficacy, which is already low and has been reduced season after season, reaches < 30%, could the addition of a multisite revert the situation? Would multisites be used solo because they provide better control than site-specifics? Considering the chemical control of ASR in Brazil, with the well-defined presence of cross and multiple resistance to site-specifics, reflected in a constant evolution of sensitivity reduction in P. pachyrhizi season after season, will we reach a situation in which the most efficient control would be achieved with multisites solo? Therefore, would multisites withstand the enormous selection pressure for resistance? Let us remember the development of P. infestans resistance to metalaxyl (in 1977) and the solution given by the ready-made commercial mixture with mancozeb (in 1980): would this not be an indication that this is the practice to be pursued in Brazil for economically sustainable control and for fungicides with a long effective life in controlling soybean rust? 
What has been the effective life of the metalaxyl + mancozeb mixture in controlling downy mildews, and have cases of mildew resistance to this mixture or similar ones been reported? In the same direction, and similarly, to ensure a long effective life in the control of P. pachyrhizi, would the use of ready-made, liquid commercial mixtures containing a DMI (prothioconazole and/or tebuconazole) and a QoI (picoxystrobin and/or trifloxystrobin) plus a multisite (chlorothalonil, mancozeb, or copper oxychloride) be a solution? The exposure time of P. pachyrhizi to multisite fungicides is still too short to make a judgment about their effective life.
v3-fos-license
2020-08-15T13:05:45.250Z
2020-08-15T00:00:00.000
221118164
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://doi.org/10.1016/j.clinph.2020.08.001", "pdf_hash": "a17f50057ded999b84c9b346fdd0f08f1541c6b7", "pdf_src": "Elsevier", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41757", "s2fieldsofstudy": [ "Medicine" ], "sha1": "a17f50057ded999b84c9b346fdd0f08f1541c6b7", "year": 2020 }
pes2o/s2orc
Ictal EEG source localization in focal epilepsy: Review and future perspectives Electroencephalographic (EEG) source imaging localizes the generators of neural activity in the brain. During presurgical epilepsy evaluation, EEG source imaging of interictal epileptiform discharges is an established tool to estimate the irritative zone. However, the origin of interictal activity can be partly or fully discordant with the origin of seizures. Therefore, source imaging based on ictal EEG data to determine the seizure onset zone can provide precious clinical information. In this descriptive review, we address the importance of localizing the seizure onset zone based on noninvasive EEG recordings as a complementary analysis that might reduce the burden of the presurgical evaluation. We identify three major challenges (low signal-to-noise ratio of the ictal EEG data, spread of ictal activity in the brain, and validation of the developed methods) and discuss practical solutions. We provide an extensive overview of the existing clinical studies to illustrate the potential clinical utility of EEG-based localization of the seizure onset zone. Finally, we conclude with future perspectives and the needs for translating ictal EEG source imaging into clinical practice. 1. The importance of seizure onset zone localization in the presurgical evaluation The ultimate goal of epilepsy treatment is to render a patient seizure-free without causing side-effects. In pharmacoresistant focal epilepsy, resective brain surgery is the treatment with highest efficacy. The multimodal presurgical evaluation aims to answer the question as to whether a patient can benefit from epilepsy surgery by delineating the so-called epileptogenic zone (EZ), i.e. the minimal brain region that needs to be removed to stop the seizures, and by distinguishing it from eloquent brain areas which must remain untouched (Ryvlin et al., 2014). First-line diagnostic methods: Noninvasive video-EEG, structural MRI, and neuropsychology During the presurgical evaluation process, a multitude of techniques is used to approach the EZ. Long-term videographic recording identifies seizure semiology, which, together with the patient's subjective experience, already gives important hints for seizure onset localization. Simultaneously, non-invasive (scalp) electroencephalogram (EEG) is recorded in the form of long-term combined video-EEG monitoring. The EEG is generated by potential differences across EEG electrodes over time, and conventional EEG analysis is based on visual inspection of waveform patterns. The EEG signatures of seizures and interictal epileptiform discharges (IEDs) allow for a crude estimation of the seizure onset zone (SOZ) and the onset zone of IEDs, which is termed the irritative zone (IZ). Both the SOZ and, to a lesser extent, the IZ are taken as surrogates for the conceptual EZ (Zijlmans et al., 2019). High-resolution structural magnetic resonance imaging (MRI) visualizes potential structural abnormalities in a patient's brain. The identification of a resectable brain lesion via MRI doubles the chance of postsurgical seizure-freedom (Tellez-Zenteno et al., 2010). In addition, neuropsychological testing aims at revealing localization-related brain dysfunctions. 
If seizure semiology, EEG, structural MRI and neuropsychology yield concordant results, and if the presumed EZ does not include eloquent cortex, the patient can directly proceed to surgery (Ryvlin et al., 2014). Second-line methods: Additional noninvasive diagnostic tools In case the basic workup leads to inconclusive or insufficient results, further non-invasive techniques can be added, and some of them are performed routinely in specific centers. First, the sensitivity of MRI can be increased by morphometric postprocessing. Second, nuclear imaging techniques are widely used to visualize metabolic correlates of epilepsy. While interictal positron emission tomography (PET) using fluorodeoxyglucose or other radioactive tracers visualizes chronic changes in brain metabolism, single photon emission computed tomography (SPECT) is performed ictally to visualize regionally altered brain perfusion during seizures. The SPECT tracer is administered as early as possible during a seizure, followed by postictal image acquisition (von Oertzen, 2018). Contrast and accuracy increase when images are subtracted from an interictal baseline and co-registered to MRI (Subtraction of Ictal SPECT Co-registered with MRI, SISCOM). Magnetoencephalography (MEG) records magnetic fields over the scalp using magnetometers, and MEG data can be processed similarly to EEG data (Ryvlin et al., 2014; Zijlmans et al., 2019). A further diagnostic option is expanding the diagnostic yield of scalp EEG beyond the limits of conventional analysis (see below). While the EEG has a high temporal resolution in the order of milliseconds, its spatial resolution at the sensor level is low, with only lobar or sub-lobar precision (Burle et al., 2015). It can be increased by using high-density (hd-) EEG recording systems with 64 or more electrodes. Although hd-EEG systems have become more widely available worldwide, keeping hd-EEG caps in place for more than 24 hours can cause patient discomfort, and it may be difficult to achieve good signal quality across all electrodes for a long time. Furthermore, EEG suffers from the volume conduction problem: potentials at a certain brain region are recorded by all electrodes simultaneously. Depending on the orientation of the gray matter generating the potential, the brain activity can be seen at distant electrodes. EEG source imaging (ESI) helps to overcome this problem. Based on mathematical models, ESI estimates the localization of scalp-recorded potentials and plots the sources of cerebral activity within a 3D model of the brain (see Section 3.3). Although not widely used yet, ESI of interictal epileptic activity ("spikes") has started to find its way into presurgical epilepsy evaluation (Mouthaan et al., 2016). The reported sensitivities to successfully localize the EZ in the presurgical evaluation range from 30% to 90% for interictal PET (70-90% in temporal lobe epilepsy (TLE), 30-60% in extratemporal lobe epilepsy (ETLE)) and from 66% to 97% for ictal SPECT (Knowlton, 2006; la Fougère et al., 2009). A study based on 152 patients (102 TLE, 50 ETLE) reported a sensitivity of 69% and a specificity of 44% for PET, 58% and 47% for SPECT, and 76% and 53% for structural MRI (Brodbeck et al., 2011). By comparison, interictal ESI had a sensitivity and specificity of 84% and 88% in the case of hd-EEG. These numbers dropped to 66% and 54%, respectively, when using low-density EEG with no more than 32 electrodes. 
MEG source imaging (MSI) of IEDs achieves sensitivity values between 55 and 90% (Knowlton et al., 2008;Stefan et al., 2011;Kasper et al., 2018;Rampp et al., 2019). Recently, a sensitivity and specificity of 79% and 75% were reported for ESI of automatically detected spikes in long-term low-density EEG. Clinical interpretation of automatically generated source localization reports from long-term EEG increased the sensitivity to localize the EZ to 88% (Baroumand et al., 2018). In addition, interictal ESI and MSI have recently been shown to provide valuable information for tailoring individual invasive EEG monitoring (Duez et al., 2019). Third-line investigations: Intracranial EEG recordings Even additional non-invasive methods might not be sufficient to reliably delineate the SOZ. In such cases, invasive EEG (IEEG) becomes necessary. Subdural electrodes are placed onto the cortex (electrocorticography), while depth electrodes are inserted into the brain tissue itself (stereo-EEG). Unlike scalp EEG, IEEG allows recording seizure activity directly from its origin, as long as the electrodes have been placed correctly. On the other hand, the spatial sampling of IEEG is comparably low, since only well-chosen parts of the brain can be assessed. Due to its invasive nature, IEEG bears a 0.5% risk of major complications (Jayakar et al., 2016). Therefore, ideally, results from the non-invasive diagnostic methods help avoid intracranial EEG or at least aid in selecting candidate regions for IEEG as accurately as possible. Why ictal ESI? Overall, despite the widely available ictal EEG recordings in most patients, neuroimaging during presurgical evaluation (except for ictal SPECT) relies on interictal epileptic activity. However, epilepsy surgery aims at eliminating the origin of seizures and not of IEDs, and SOZ and IZ are not necessarily concordant (Hamer et al., 1999;Bartolomei et al., 2016). Therefore, it is of high clinical value to localize the sources of seizures complementary to those of interictal epileptic activity. Ictal ESI promises to provide more accurate and/or objective interpretation of scalp EEG compared to visual inspection, and to add useful localization information that can guide resection or placement of intracranial EEG electrodes. In this review, we focus on the challenges of ictal ESI, how they can be overcome, how well it performs in estimating the EZ, and what is needed to bring ictal ESI into clinical practice. How to tackle issues and challenges in ictal EEG source imaging There are several challenges in localizing the SOZ from scalp EEG during presurgical workup. The quality of ictal EEG data is often impaired by movement, muscle, and eye artifacts, and the ictal activity can propagate during a seizure. Furthermore, choosing a suitable validation strategy for ictal ESI is not trivial. Below, we address established ways to tackle these problems. Dealing with artifacts in ictal EEG EEG recorded during a seizure is often of low quality due to clinical manifestations of the seizure. Muscle, movement and/or eye artifacts can significantly reduce the EEG's signal-to-noise ratio (SNR). Since it is not possible to prevent the occurrence of these artifacts, strategies have been developed to cope with the low SNR or to enhance it. 
As a first step in most analyses, data are pre-processed, usually involving a band-pass filter to reduce baseline drift (low frequencies) and muscle artifacts (high frequencies), and a notch filter at 50 or 60 Hz to reduce the power line noise. Additional techniques such as principal or independent component analysis (PCA or ICA) can remove eye blinks or cardiac artifacts from the EEG. Once the EEG is preprocessed, one strategy is to manually select ictal time points with high SNR for EEG source localization. For instance, a single time-point or a short EEG epoch close to the seizure onset may be chosen that contains as little artifact as possible. Repetitive ictal EEG events such as spikes or rhythmic waveforms during seizure onset can be averaged to further increase the SNR. Unfortunately, ictal events may be non-uniform and seizures may spread rapidly. Therefore, the average is often restricted to a small number of events which sets limits to SNR enhancement (Boon et al., 1997a). Another strategy is scalp voltage map analysis, also called topographic analysis, of the ictal EEG. A specific scalp EEG topography that reflects the ictal EEG activity is extracted and used as input for source localization. One way to extract the ictal EEG topography is spectral analysis. The frequency band of interest can be chosen manually so that it includes the rhythmic seizure activity of choice and, at the same time, excludes physiological activity at other frequencies. Then, the power at each electrode in this frequency band is computed. Defining the frequency band upfront can be tricky since seizure activity can arise at multiple frequencies simultaneously, and the choice is, to a certain extent, subjective. Recently, an automated technique has been proposed to determine the most dominant rhythmic EEG pattern within the earliest ictal activity and its corresponding topography (Koren et al., 2018). An alternative way for ictal scalp topography extraction is using decomposition techniques such as PCA, ICA or tensor decomposition. The EEG is divided into components with specific spatial and temporal signatures, and in case of tensor decomposition also with a spectral signature. From these components, the one which represents best the seizure activity is selected for source localization. As a potential limitation, decomposition methods depend on assumptions. For example, PCA assumes orthogonality between neural activities and artifacts, while ICA assumes that the components are mutually statistically independent. Although this sounds logical in theory, in practice the story is more complicated. Ictal activity may be smeared into multiple components, or other unwanted activity, such as muscular artifacts, can remain in the ictal component. This can make the selection of single components for source localization cumbersome and sometimes even impossible. Visualizing the spread of ictal activity Ictal activity can rapidly spread through the brain and several cortical regions can become active during a seizure as part of the patient's individual epileptic network. Therefore, in addition to ESI that estimates the brain region that is most active during a seizure, connectivity analyses study interactions of brain regions within the network. Functional connectivity assesses activations within the network which are correlated, for example via functional MRI. 
In addition, effective connectivity also estimates causality, in order to distinguish the main driver(s) of the epileptic network from regions which are secondarily activated (Spencer, 2002;Richardson, 2012). The main epileptic drivers are thought to represent the seizure onset zone more accurately than brain sources of maximum ictal activity. Choosing the best validation strategy Once an innovative diagnostic technique such as ictal ESI is developed, the final challenge is to find a validation strategy to optimally quantify the method's performance. Basically, ESI can be validated via concordance with a 'ground truth'. One obvious ground truth for ictal ESI could be the SOZ defined by IEEG (Megevand et al., 2014). Ideally, ictal ESI results could be compared to simultaneously recorded IEEG. However, only very few centers perform simultaneous scalp and intracranial recordings for research purposes, and application of ESI on simultaneously recorded seizures comes with additional challenges. Intracranial electrodes, burr holes and bone flaps can influence the propagation of the electric field and distort scalp voltage topography and ESI. Indeed, the non-conductive part of subdural grids has been shown to attenuate scalp potentials of generators located beneath (van Mierlo et al., 2014). If instead scalp EEG and IEEG are recorded separately, one cannot guarantee that the seizures originated from the same location and had the same extent. Moreover, IEEG does not necessarily sample the true or full SOZ. If invasive electrodes were placed close to the SOZ but not inside it, the tissue beneath the electrodes closest to the real SOZ would be incorrectly considered to harbor the SOZ. Thus, as an alternative, the ground truth can be based on the outcome of the complete presurgical evaluation or, which appears to be the best solution, on the resected brain area combined with the patient's postsurgical seizure outcome (Staljanssens et al., 2017a, 2017b; Koren et al., 2018). If a patient is seizure-free after surgery, the EZ must have been harbored in the resected tissue. Still, since the extent of the resected brain area is a compromise between the risk of functional loss for larger resections and the risk of not being seizure-free for smaller resections, the area that was delineated during presurgical evaluation or that was eventually resected is often larger than the SOZ. Another way to investigate clinical usefulness is estimating the added value of ictal ESI for the presurgical evaluation process (Boon et al., 2002;Koren et al., 2018), to see if changes are made to diagnostic and therapeutic management. Other studies validated their ESI results by comparison to ictal SPECT (Hallez et al., 2009;Yang et al., 2011;Habib et al., 2016) or structural MRI results (Ding et al., 2007;Kovac et al., 2014), an approach which has limitations when the SPECT shows ictal spread or when the MRI-identified lesion is not the cause of epilepsy. When reviewing the concordance of a specific analysis, spatial accuracy criteria are critical. Some studies report absolute distances, with the limitation that short distances cannot rule out lateralization in the wrong hemisphere. Sub-lobar and lobar concordance (with or without an atlas-based parcellation), a margin of tolerance around the reference standard, or a variable distinction between full and partial concordance are also frequently and variably used, which should be accounted for in the evaluation. 
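To make the artifact-handling and topography-extraction strategies described above concrete, here is a minimal NumPy/SciPy sketch that band-pass and notch filters a multichannel ictal EEG segment, averages repetitive ictal events, and computes a band-power topography in a manually chosen ictal frequency band. The channel count, sampling rate, event times, and band limits are invented for illustration, and real pipelines would typically rely on dedicated EEG software.

```python
# Illustrative ictal-EEG processing: filtering, event averaging, and band-power
# topography extraction (all data and parameters below are synthetic).
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch, welch

fs = 256                                     # sampling rate in Hz (assumed)
n_channels = 19
rng = np.random.default_rng(0)
eeg = rng.standard_normal((n_channels, fs * 60))   # placeholder for a real recording

# 1. Band-pass 1-70 Hz (baseline drift and high-frequency muscle artifact),
#    then notch out the 50 Hz power-line component.
b_bp, a_bp = butter(4, [1.0, 70.0], btype="bandpass", fs=fs)
eeg_f = filtfilt(b_bp, a_bp, eeg, axis=1)
b_n, a_n = iirnotch(50.0, Q=30.0, fs=fs)
eeg_f = filtfilt(b_n, a_n, eeg_f, axis=1)

# 2. Average short epochs around repetitive early-ictal events (invented latencies,
#    in seconds) to raise the SNR of the waveform used for source localization.
event_times = [12.1, 12.6, 13.2, 13.8, 14.3]
half_win = int(0.2 * fs)
epochs = [eeg_f[:, int(t * fs) - half_win: int(t * fs) + half_win] for t in event_times]
ictal_average = np.mean(epochs, axis=0)      # channels x samples

# 3. Band-power topography: Welch PSD per channel, summed over an assumed rhythmic
#    ictal band (here 5-9 Hz); the resulting per-electrode vector is the kind of
#    scalp map that is then fed into the inverse solution.
ictal_epoch = eeg_f[:, int(12 * fs): int(16 * fs)]
freqs, psd = welch(ictal_epoch, fs=fs, nperseg=fs)
band_mask = (freqs >= 5.0) & (freqs <= 9.0)
topography = psd[:, band_mask].sum(axis=1)
topography /= np.linalg.norm(topography)     # unit-norm map per electrode
print(ictal_average.shape, np.round(topography, 3))
```

The averaged waveform and the band-power map are two of the SNR-enhanced inputs discussed above; decomposition-based alternatives (PCA, ICA, tensor decomposition) would replace step 3 with component selection.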
Literature search criteria In the next sections we will provide an overview of all clinical studies on ictal ESI in humans that were published until March 31, 2020, in English. PubMed was searched using the following search string: ( We will discuss their findings and how the authors tackled the aforementioned challenges. For all of them, Table 1 lists the number of included patients, the number of EEG electrodes, details about the ESI approach (especially head model and inverse solution), the method to estimate the SOZ, and the validation approach at hand. In total, 991 patients were evaluated, 661 of whom had TLE (67%), while 261 had ETLE (26%) and 69 (7%) were not distinguishable. Seventeen studies used hd-EEG recordings while the remaining 50 relied on low-density EEG. Twelve studies used subsequent connectivity analysis. All of them will be reviewed in the following. Pioneering studies While interictal ESI was introduced in the 1970s, the first ictal ESI studies were published in the 1990s. Meant as proofs-of-principle, they modelled orientations rather than locations of single or very few dipoles from narrow-band filtered early ictal discharges. Among the first was Ebersole, who applied ESI to ictal EEG of 17 patients with TLE (Ebersole, 1994). About the same time, Boon and D'Havé modeled dipole localization and orientation of non-averaged early ictal discharges in 15 presurgical patients with MR-lesional focal epilepsy (Boon and D'Havé, 1995) (Fig. 1). They found that ictal dipoles corresponded well with the respective interictal dipoles, and replicated their findings in larger cohorts of 33 patients (Boon et al., 1997a) and 41 patients (Boon et al., 1999). While in these three studies only a minority of patients had IEEG to validate the ictal source localization, in a fourth study all 11 patients had IEEG, and ictal ESI results were congruent with the IEEG-defined SOZ in all of them (Boon et al., 1997b). Consistently across these early works, the most stable dipoles were found in cases of mesial TLE, and most examined patients had this type of epilepsy. Other authors focused on the temporal lobe specifically. In 40 TLE patients who later underwent successful temporal epilepsy surgery, Assaf and Ebersole projected ictal EEG into 19 predefined cortical regions, 4 of those in each temporal lobe (Assaf and Ebersole, 1997). By comparing the most prominent source at the earliest recognizable ictal rhythm to the SOZ identified by IEEG, they could reliably differentiate temporal lobe seizures of mesiobasal origin from those of lateral neocortical origin. Using the same approach, the authors successfully correlated ictal source localization to surgical outcome in 75 patients with anteromesial temporal lobectomy (Assaf and Ebersole, 1999). Around the same time, Mine et al. applied ESI to a 10-ms window around the peak of an early ictal discharge in one TLE and one ETLE patient (Mine et al., 1998). In an extension study, the authors were able to discriminate mesial from lateral temporal sources, which was not possible by visual interpretation of the scalp EEG (Mine et al., 2005). Spatial resolutions were limited to large sub-lobar regions, and comparison with interictal ESI was not performed. These early studies showed that ictal ESI is feasible and can have an added value in the presurgical evaluation of epilepsy. 
The methods used were straightforward, the spatial resolution was limited, and the validation was often solely qualitative or descriptive, meaning that no quantitative distances to the lesion, to the IEEG electrodes with the highest ictal activity, or to the resection were given. Around the year 2000, the first papers were published using more sophisticated methodological approaches such as frequency-specific analysis (Lantz et al., 1999; Blanke et al., 2000) or decomposition analysis (Kobayashi et al., 2000; Lantz et al., 2001). These methods were validated in 3-15 patients per study with promising results (for detailed descriptions, see the sections on spectral and decomposition analyses below). Increasing computational power also allowed for higher spatial resolution of ESI and more quantitative and rigorous validation. Merlet and Gotman were the first to use a quantitative validation approach by calculating the distance between the estimated source of ESI and the most active electrode of simultaneous IEEG (Merlet and Gotman, 2001). Ictal ESI was possible in only 6 of 15 patients. In 4 of these, the distance to the most active IEEG contact was smaller than 15 mm. The authors found that mesial temporal ictal discharges were depictable by ESI but invisible to conventional EEG inspection, unlike seizure spread to lateral temporal or frontal areas. Although the study showed that, at that time, ictal dipole modelling was feasible for only a minority of seizures, it proved that quantitative validation of ictal ESI is possible.

Influence of different head models and source imaging techniques

ESI algorithms must solve two bioelectrical problems, the forward and the inverse problem. The forward problem asks which scalp voltage distribution results from a given cerebral source activity. To solve it, the anatomy and the different volume conduction properties of the head's tissues and compartments need to be modelled as realistically as possible. Early forward models (or head models) consisted of 3 concentric spheres representing brain, skull, and scalp, while nowadays realistic head models with up to 7 compartments modelling the individual anatomy are in use. The inverse problem asks which cerebral source activity results from a given scalp voltage distribution. The number of solutions to the inverse problem is infinite. Thus, inverse models include various assumptions and constraints to arrive at meaningful ESI results. There are two main types of inverse solutions: equivalent/single dipole models and distributed source models. Single dipole models aim at defining the location and/or orientation of single dipoles that best explain the EEG signal. Such dipoles represent a "center of mass" within a patch of activated cortex; however, such a center of mass is often localized in the white matter if the solution space is not restricted to the cortex. Single dipoles have no spatial extent; thus, the size of a source cannot be estimated. As an advantage, the orientation of the dipole carries important information (see Fig. 1). Distributed inverse solutions, in contrast, are based on regular 3D grids of up to thousands of solution points throughout the brain or gray matter. Patches of gradually activated sources are displayed; therefore, the maximum of activation is usually taken for validation (see Fig. 2). Localization accuracy of distributed inverse solutions is intrinsically limited by the distance between solution points (usually around 5 mm), so that concordances can be formally expressed only as multiples of inter-point distances.
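As a rough illustration of these two steps, the sketch below uses the open-source MNE-Python package with its fsaverage template anatomy. The file name, epoch limits and regularization are placeholder assumptions rather than a validated ictal ESI pipeline, and exact function arguments may differ between library versions.

import os
import mne

# Load a hypothetical scalp EEG file containing a seizure and keep an early ictal epoch
raw = mne.io.read_raw_edf("ictal_scalp_eeg.edf", preload=True)   # placeholder file name
raw.set_montage("standard_1020")
raw.set_eeg_reference("average", projection=True)
raw.filter(1.0, 30.0)

onset = 120.0                                                    # assumed seizure onset (s)
events = mne.make_fixed_length_events(raw, start=onset, stop=onset + 3.0, duration=3.0)
epochs = mne.Epochs(raw, events, tmin=0.0, tmax=3.0, baseline=None, preload=True)
evoked = epochs.average()

# Forward problem: template anatomy (fsaverage) as head model and source space
fs_dir = str(mne.datasets.fetch_fsaverage())
src = os.path.join(fs_dir, "bem", "fsaverage-ico-5-src.fif")
bem = os.path.join(fs_dir, "bem", "fsaverage-5120-5120-5120-bem-sol.fif")
fwd = mne.make_forward_solution(evoked.info, trans="fsaverage", src=src, bem=bem,
                                eeg=True, meg=False)

# Inverse problem, family 1: a single equivalent current dipole
cov = mne.make_ad_hoc_cov(evoked.info)
dipole, _ = mne.fit_dipole(evoked, cov, bem, trans="fsaverage")

# Inverse problem, family 2: a distributed solution (here sLORETA)
inv = mne.minimum_norm.make_inverse_operator(evoked.info, fwd, cov)
stc = mne.minimum_norm.apply_inverse(evoked, inv, lambda2=1.0 / 9.0, method="sLORETA")
peak_vertex, peak_latency = stc.get_peak()                       # source map maximum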
Until the 1990s, ictal ESI studies relied on source models of single or very few dipoles. Later, distributed inverse solution techniques such as multiple signal classification (MUSIC) or low-resolution electromagnetic tomography (LORETA) were increasingly used. Around the year 2000, Herrendorf, Waberski and colleagues aimed at comparing different head models and inverse solutions for ESI of interictal epileptic discharges. By chance, they recorded a seizure in one patient with mesial TLE during a short-duration standard EEG. Following ESI with their realistic individual head model, this patient's ictal source was located 12 mm away from the interictal source. When they compared six different combinations of forward and inverse models, including MUSIC and a current density reconstruction approach, ictal and interictal findings of the same patient were highly similar and concorded with the area of successful resection on a lobar level. A couple of years later, Beniczky et al. systematically tested MUSIC to localize ictal EEG activity in 10 patients with TLE (Beniczky et al., 2006). In 8 patients, ictal source maxima were in the same area as SPECT-detected hyperperfusion during the same seizure. As a limitation, ictal SPECT hyperperfusion clusters have low temporal resolution and often display not only the ictal onset zone but also regions of ictal propagation (van Paesschen et al., 2007). In 2009, Lee et al. were the first to use the LORETA algorithm for ictal ESI (Lee et al., 2009). Initial ictal discharges of 22 patients with subsequent successful temporal resections were divided into three different frequency bands and compared to baseline EEG recordings. Different scalp EEG patterns led to distinct patterns of source activation, with solutions of the 5-9 Hz frequency band corresponding best to the resected areas. In the same year, Rullmann et al. showed that different ESI techniques applied to a single patient's averaged ictal peak led to similar localizations close to an MRI lesion (Rullmann et al., 2009). They found it crucial that the forward model correctly modelled CSF and skull. Of note, segmentation of the different tissues in patients with large brain lesions may be prone to mistakes (Birot et al., 2014). Another study in 10 patients found more pronounced differences between the applied inverse techniques (Koessler et al., 2010). The authors reported sub-lobar concordance with IEEG in 9/10 patients using single dipolar methods and in 5-7/10 patients using distributed inverse techniques. Later, Habib et al. found ictal SPECT foci to correspond well to the results of several distributed inverse techniques, namely weighted minimum norm estimates (WMNE), dynamic statistical parametric mapping (dSPM) and standardized low resolution tomography (sLORETA), in 8 patients (Habib et al., 2016). Kovac et al. tested different inverse solutions on ictal EEG patterns that were non-lateralizing on visual inspection (Kovac et al., 2014). ESI of 17 seizures from 8 patients was compared to frontal MRI lesions. Ictal ESI clearly lateralized the EEG patterns in 47% of patients using a single dipole approach (75% correct) and in 29% of patients using distributed solutions (60-80% correct). This emphasized the usefulness of ictal ESI in case of non-lateralizing visual EEG interpretation. Nevertheless, all these studies were certainly underpowered to detect statistically significant and meaningful differences in the performance of various forward and inverse models.
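Continuing the hypothetical objects from the previous sketch, a comparison of several distributed methods on the same averaged ictal waveform, in the spirit of the study described next, reduces to a short loop; again, this is only an illustration.

from mne.minimum_norm import apply_inverse

# Reusing evoked and inv from the sketch above; purely illustrative
peaks = {}
for method in ("MNE", "dSPM", "sLORETA", "eLORETA"):
    stc = apply_inverse(evoked, inv, lambda2=1.0 / 9.0, method=method)
    peaks[method] = stc.get_peak()[0]          # peak vertex per method

# Agreement of the maxima across methods is one simple robustness check
print(peaks)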
Beniczky et al. applied five different inverse solutions to averaged ictal onset waveforms of 38 seizures in 22 patients obtained with 64-channel EEG (Beniczky et al., 2016). In 13 of their patients, all inverse methods agreed on the localization at a sub-lobar level, and in another 6 patients, all but one technique agreed. A distributed technique called Classical LORETA Analysis Recursively Applied (CLARA) and dipole fitting yielded the highest accuracy. These concorded with the resected zone (RZ) in 13 of those 14 patients who became seizure-free after surgery. This study confirmed that different techniques lead to similar source localization in most patients, but patient-specific differences may occur. Thus, the use of different approaches and assessment of their concordance can lead to more robust results. Lately, Sharma et al. compared ictal and interictal ESI using a single dipole and a distributed source model in 87 consecutive patients (Sharma et al., 2018). Of these, 84 had seizures during long-term low-density EEG. Ictal ESI using the single-dipole method yielded meaningful results in 79 cases, significantly more than the distributed source model (n = 69). In 47 patients with surgery and 12-month follow-up, localization accuracies across all types of ESI were similarly high (51-62%) and not significantly different from those of other methods (e.g., MRI, 55%). These results, obtained in a large patient cohort, underline that modern state-of-the-art ictal ESI is feasible and accurate in most patients undergoing presurgical epilepsy evaluation.

Added value of high-density EEG

In interictal ESI, accuracy was shown to improve when the spatial sampling of scalp EEG was increased through the use of hd-EEG setups containing 64-256 electrodes (Brodbeck et al., 2011). The first study on ictal (128-)256-channel ESI was published in 2010 (Holmes et al., 2010). The authors applied ESI to early ictal EEG epochs of 10 patients and found lobar concordance with IEEG results in 8 of them. More recently, using 64-channel EEG, Akdeniz found ictal ESI concordant with the RZ in 13/13 patients who were seizure-free after surgery, whereas for two patients with rather unfavorable surgical outcome, the SOZ estimation pointed at a region adjacent to the RZ (Akdeniz, 2016). Nemtsas et al. evaluated 256-channel EEG recordings of 14 patients (Nemtsas et al., 2017). In 6 patients with subsequent successful surgery, the maximum of the averaged source map of each seizure was concordant with the RZ, but it was discordant in 2 patients with unfavorable seizure outcome. Furthermore, ictal ESI corresponded to interictal ESI in 5/6 non-operated patients. Recently, Plummer et al. combined and compared hd-EEG and MEG for source imaging in epilepsy (Plummer et al., 2019). They recorded seizures in 11/13 sleep-deprived epilepsy patients using simultaneous hd-EEG/MEG. Across a total of 33 seizures, 25 were visible in both modalities, 7 were visible in EEG only, and one was visible in MEG only. Twenty-four of 33 were localizable by ESI, 14/33 by MSI, and 25/33 by combined EMSI. Ictal hd-ESI showed higher agreement with subsequent surgical RZs than ictal MSI; for both ictal and interictal discharges, the most accurate results came from very early time points. Contrary to what the authors had expected, the combination of independent hd-ESI and MSI analyses outperformed spatially combined EMSI. The superior accuracy of ictal hd-ESI results, on top of the difficulty of recording ictal events with MEG, strengthens the potential clinical usefulness of hd-EEG for SOZ localization.
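Several of the studies below formally compare high- and low-density sampling; reducing a high-density montage to a standard low-density subset can be expressed in a few lines. The channel names are assumptions about the montage in use, and the forward and inverse operators must be recomputed for the reduced montage before localization errors can be compared.

# Reusing the hypothetical raw object from the sketches above
ten_twenty = ["Fp1", "Fp2", "F7", "F3", "Fz", "F4", "F8",
              "T7", "C3", "Cz", "C4", "T8",
              "P7", "P3", "Pz", "P4", "P8", "O1", "O2"]
raw_low_density = raw.copy().pick_channels([ch for ch in ten_twenty if ch in raw.ch_names])
# Forward and inverse operators must then be rebuilt for the reduced montage before the
# high- and low-density source localizations can be compared against the same reference.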
To date, three studies have formally compared high and low electrode sampling. Kuo et al. assessed the accuracy of ictal vs. interictal ESI based on 256-channel EEG vs. routine EEG (10-20 system) (Kuo et al., 2018). Out of 10 patients who had epileptogenic MRI lesions (n = 7), IEEG (n = 6) and/or resective surgery (n = 6), hd-EEG yielded sub-lobar concordance with the clinical findings in 9/10 cases for ictal ESI and in 6/10 for interictal ESI. For comparison, low-density ESI of interictal discharges was concordant in 4/10 and of ictal discharges in another 4/10 cases. These results indicate that the use of hd-EEG might be of additional value if recordings can be long enough to capture seizures. Staljanssens et al. (Staljanssens et al., 2017a) and Lu et al. (Lu et al., 2012b) both applied ESI and subsequent connectivity analyses to ictal hd-EEG (see the section on connectivity analysis below). Both groups consistently found that spatial down-sampling of the EEG to fewer electrodes increased the localization error of the respective connectivity approach. Further studies based on hd-EEG are reviewed in the following sections (see also Table 1). As a limitation of hd-EEG, the duration of recordings is limited to about 24 hours due to technical aspects and potential discomfort to the patient. Therefore, ictal hd-EEG can usually be recorded only in patients with very frequent seizures or by pure chance.

Constraining the solution space based on previous assumptions

The source space for ESI is normally constrained to the whole brain, the cerebrum, the gray matter, or the cerebral cortex only. Still, it can be restricted further. To compare sources of ictal activity in favorable vs. unfavorable surgical outcome in mesial TLE, Breedlove et al. restricted the solution space to the temporal lobe (Breedlove et al., 2014). In 10 patients per group, one representative seizure per patient was analyzed with a voxel-based inverse solution. Patients with poor surgical outcome had a broader distribution of ictal sources, beyond the limits of the anterior temporal lobe resections, compared to those who became seizure-free. The authors concluded that unfavorable surgical outcome in mesial TLE seems to correlate with a more widespread epileptogenic network. Nevertheless, they acknowledged that by restricting the sources to the temporal lobes, extratemporal activation may have been missed. Two retrospective pilot studies constrained the source space further, to areas pre-defined by other imaging methods. Peters et al. restricted the source solution to MRI-identified multiple tubers in 6 children with tuberous sclerosis complex (Peters et al., 2019). This increased the sensitivity of ictal ESI from 30% to 100%, with specificities of 100%. Using SPECT in 5 cases of MRI-negative epilepsy, Batista García-Ramó et al. constrained the source space to areas of ictal hyperperfusion (Batista Garcia-Ramo et al., 2019). Such SPECT-informed ESI concorded better with the RZ than ESI alone in two TLE patients with favorable surgical outcome, but not in 1 ETLE patient with unfavorable outcome; two more patients had multifocal results and did not proceed to surgery. Unfortunately, the numbers of EEG electrodes and the criteria for epoch selection were not detailed, and constraining the solution space with non-gold-standard techniques may bias the results towards inaccurate solutions.
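The a-priori restriction of the solution space described above amounts to masking the source grid before the maximum is taken; a minimal numpy sketch with entirely hypothetical coordinates is shown below.

import numpy as np

grid_points = np.random.uniform(-70, 70, size=(5000, 3))    # stand-in source grid (mm)
source_power = np.random.rand(5000)                         # stand-in ESI power per point

roi_center = np.array([35.0, -10.0, 20.0])                  # e.g., centre of a tuber or a
roi_radius = 15.0                                           # SPECT hyperperfusion cluster (mm)

inside_roi = np.linalg.norm(grid_points - roi_center, axis=1) <= roi_radius
constrained_maximum = grid_points[inside_roi][source_power[inside_roi].argmax()]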
Ictal ESI in special constellations

Some authors investigated the value of ESI in the case of misleading ictal EEG waveform patterns. For example, Catarino et al. studied paradoxical lateralization of ictal EEG (Catarino et al., 2012). In 4 patients with ETLE, visual EEG analysis showed maxima contralateral to the clinically defined epileptogenic focus. In two of them, the ictal EEG could be analyzed with ESI, and in one of those, the source was located in the correct hemisphere. In the other case, the source was apparently in the wrong hemisphere as well, and for the two remaining patients, ictal ESI was not possible for reasons that were not explained. In sum, ictal ESI was able to "correct" the paradoxical EEG lateralization in only one out of four patients. Around the same time, Elwan et al. applied ESI to ictal EEG patterns with "pseudo-temporal" maxima in 10 patients with confirmed ETLE (Elwan et al., 2013). One seizure per patient was analyzed; selection criteria were not detailed. The ictal source was falsely located inside the temporal lobe in 7 patients and was not localizable in the 3 remaining patients. Among the control patients, 9 out of 12 subjects with mesial TLE had ictal sources within the temporal lobe, but only 2 out of 11 with neocortical TLE. All patients later underwent successful epilepsy surgery. The authors concluded that ESI failed to differentiate pseudo-temporal ictal EEG patterns from "truly" temporal discharges. In order to integrate non-invasive multimodal data for presurgical epilepsy evaluation, Neal et al. developed an algorithm to co-register non-concurrent scalp EEG, resting-state fMRI, and diffusion tensor MRI to create individual 3D brain network maps (Neal et al., 2018). In a preliminary validation study, resting-state fMRI findings were co-registered to ictal ESI results in one patient with unilateral TLE and another with bilateral TLE, and compared to the fMRI resting network of a healthy control subject. The authors found symmetric network representations in the healthy control person and the person with bilateral TLE, and an asymmetric network in the patient with unilateral TLE. Further validation studies with larger patient numbers and longitudinal follow-up of individual patients were announced.

Spectral analysis of EEG data

In the studies presented above, ictal discharges or epochs were chosen manually and thus, at least to a certain extent, arbitrarily. In the studies that follow, more advanced analysis techniques were used to extract scalp voltage maps (voltage topographies) from the ictal EEG prior to source localization. These analyses are based on spectral analysis (time-frequency data) or on mathematical decomposition of the signal across time (independent components, principal components) to identify meaningful ictal components and to limit contamination by artifacts. Time-frequency analysis of ictal data allows isolating the ictal generator from other EEG signals such as biological and technical artifacts and non-epileptic brain activity.
Already in the late 1990s, Lantz et al. transformed 2-second epochs at seizure onset into the frequency domain using fast Fourier transformation (FFT) and modeled dipoles for the dominant seizure frequencies (Lantz et al., 1999). The obtained dipoles were found to be homogeneous across 7 patients with mesial TLE. Using a modification of this approach, the source of the dominant ictal rhythm was correctly lateralized in 9 out of 10 patients with successful partial temporal lobectomy (Blanke et al., 2000). As an alternative to FFT dipole approximations, authors from the same group proposed a non-stationary distributed source approximation and applied it to frontal lobe seizures of one patient (Gonzalez Andino et al., 2001). Worrell et al. used spectral analysis to determine a scalp voltage map that best described the measured scalp potentials at the seizure frequency during a 3-second epoch at seizure onset. Source localization of the extracted scalp voltage map concorded with the symptomatic brain lesion in 9/10 patients on a lobar level. Twelve years later, Bersagliere et al. applied ESI to EEG of sleep-related frontal lobe seizures (Bersagliere et al., 2013). To avoid the heavy movement artifacts associated with these hyperkinetic seizures, the authors defined the 5 seconds before clinical seizure onset as the early ictal period and compared it to a pre-ictal epoch at least 13 seconds before clinical seizure onset. Following spectral analysis, they localized the sources of delta (1-4 Hz) and sigma activity (12-16 Hz spindles). While the interictal and ictal EEG patterns per se were of no lateralizing value, the average source maximum of sigma activity lateralized correctly in all four patients who subsequently underwent successful surgical resections, whereas the source of delta activity lateralized correctly in only one patient. However, in two of the patients, left-right differences for sources of sigma activity were only marginal. At the same time, Beniczky et al. validated the diagnostic accuracy of ictal ESI in a blinded study design according to the STARD criteria (Beniczky et al., 2013). STARD, "standards for the reporting of diagnostic accuracy studies", is a rigorous approach to compare the accuracy of an index test to a reference test (Bossuyt et al., 2015). Forty-two consecutive patients fulfilled the inclusion criteria, and for 33 of them, the epilepsy team agreed on a clinical reference standard. For each patient and each seizure type, a time window was defined using time-frequency plots, and a voltage map was created for every time point of an averaged ictal waveform. ESI was performed for the voltage map distributions at onset of discharge (Fig. 2). Estimated by sub-lobar concordance with the reference standard, ictal ESI achieved a sensitivity and specificity of 70% and 76%, respectively. Twenty patients underwent surgery and 16 patients became seizure free, resulting in a positive predictive value (PPV) of 92% and a negative predictive value (NPV) of 43%. A couple of years later, using a similar approach but including ICA, authors from the same group localized the seizure onset zone of self-limiting epilepsy with centro-temporal spikes to the operculo-insular region in three patients (Alving et al., 2017). In 2018, also in concordance with STARD, Koren et al. investigated the performance of an automatic algorithm for ictal onset source localization in 28 consecutive patients (Koren et al., 2018). The algorithm detected the most dominant rhythmic EEG pattern within the earliest ictal activity, its corresponding spatial topography was used for source localization in a template MRI, and this localization was qualitatively compared with the resection on a sub-lobar level. The automated method achieved a sensitivity of 92.3% and a specificity of 60%. As limitations, only 3/28 patients had ETLE, and sources were localized in a template model, making the method less suitable for patients with brain lesions or a special head geometry.
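The core of the frequency-domain strategy described in this section can be sketched in a few lines of numpy; the array, sampling rate and frequency band below are stand-ins, not recommendations.

import numpy as np

sfreq = 256.0
epoch = np.random.randn(64, int(3 * sfreq))                 # stand-in 3-s ictal epoch (channels x samples)

spectra = np.fft.rfft(epoch, axis=1)
freqs = np.fft.rfftfreq(epoch.shape[1], d=1.0 / sfreq)

band = (freqs >= 3) & (freqs <= 12)                         # assumed range of the ictal rhythm
mean_power = (np.abs(spectra[:, band]) ** 2).mean(axis=0)
dominant_freq = freqs[band][mean_power.argmax()]

# Complex sensor topography at the dominant frequency: amplitude and phase per electrode,
# which can then be fed to a dipole or distributed inverse model
topography = spectra[:, np.argmin(np.abs(freqs - dominant_freq))]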
Decomposition analyses

The first study using decomposition analysis on ictal EEG was published in 2000 by Kobayashi et al. (Kobayashi et al., 2000). Ten seizures of 3 patients with TLE were investigated, and in every case, a source corresponding to IEEG seizure activity was found, even when seizure activity was not visually identifiable in the scalp EEG nor detectable by spectral analysis. This indicated that decomposition techniques can extract ictal EEG information that is not apparent on visual or spectral analysis. Lantz et al. decomposed ictal EEG based on peaks of seizure activity identified by the EEG global field power. They found a dominant topography in 7/9 patients, and in 6 of these 7, source localization of this topography qualitatively corresponded to the IEEG results. In 2006, Leal et al. applied ESI to EEG recordings of gelastic seizures (Leal et al., 2006). Gelastic seizures are typically caused by hamartomas in the hypothalamus, a deep brain region far from the scalp, and visual analysis of ictal EEG is usually of little diagnostic value. Following ICA, the authors found a rhythmical component that indicated a hypothalamic source in 3 out of 3 patients. Interestingly, two of the patients had additional components that occurred later and were compatible with more superficial sources (Fig. 3). Two years later, the same team of authors applied the same method to ictal EEG of 4 patients with tuberous sclerosis (Leal et al., 2008). Sources of the identified rhythmical components were nearer to the epileptogenic tuber than sources of interictal discharges. Jung et al. used an ICA-based approach to study propagation of ictal onset patterns in 12 patients with TLE. During the first 10 s of averaged seizures, dipole sources were located primarily in the mesial temporal and the medial frontal lobe, while at 20-40 s, the lateral anterior temporal lobe and the basal ganglia were involved as well. On a group level, two different ictal onset EEG waveform patterns, regular theta activity vs. irregular delta activity, had distinct propagation patterns. These findings are of pathophysiological relevance, but only patients with prototypical ictal onset patterns were included in the study. Stern et al. decomposed early ictal 5-second epochs with principal component analysis (PCA) to perform ESI with the distributed inverse solution LORETA (Stern et al., 2009). Among the principal components, they visually identified the ictal component and chose the most robust local maximum following LORETA. For each of 5 successfully operated patients with TLE, the identified source was consistent across up to 3 seizures, indicating a high reliability of the findings. However, in only 2 of the 5 patients was the located source actually inside the temporal lobe.
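A minimal, hedged sketch of such a decomposition-based workflow (here using ICA via MNE-Python) is given below; the component-selection criterion (band power of the component time courses) is a deliberately crude assumption, whereas the cited studies used visual inspection or more elaborate criteria.

import numpy as np
from mne.preprocessing import ICA

# raw as in the earlier hypothetical sketches
ica = ICA(n_components=20, random_state=0)
ica.fit(raw.copy().filter(1.0, 40.0))

sources = ica.get_sources(raw).get_data()                    # component time courses
freqs = np.fft.rfftfreq(sources.shape[1], d=1.0 / raw.info["sfreq"])
band = (freqs >= 3) & (freqs <= 12)
band_power = (np.abs(np.fft.rfft(sources, axis=1))[:, band] ** 2).sum(axis=1)
ictal_component = int(band_power.argmax())                   # crude selection criterion

ictal_topography = ica.get_components()[:, ictal_component]  # scalp map to be localized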
Miller et al. performed source localization of infraslow ictal EEG activity recorded with direct current-coupled electrode setups (Miller et al., 2007). Five of their 11 patients had successful epilepsy surgery, and the sources of infraslow activity concorded with the resection sites on a lobar level. PCA was performed in specific seizures with a clear low-frequency signal; however, the authors found ESI of the identified dominant components to be of little value for studying infraslow ictal shifts. Still, decomposition techniques may help to clean artifactual EEG data. When Hallez et al. identified muscle and eye artifacts via blind source separation or ICA and removed them from the ictal EEG, ESI improved in 5 of 8 patients, meaning that distances to a region of ictal SPECT activation became smaller than with ESI based on the raw EEG (Hallez et al., 2009). Nevertheless, median distances between the source estimations and ictal SPECT were larger than 20 mm in 7/8 patients. Seven years later, Li et al. based ictal ESI on raw EEG vs. cross-frequency coupled potentials (Li et al., 2016). The latter were calculated from the phase feature of low-frequency rhythms and the amplitude feature of high-frequency rhythms. Across 7 patients with favorable surgical outcome, 14 of 17 seizures were localized correctly, compared to 0 out of 17 based on raw EEG. As a limitation, the authors did not describe the duration of postsurgical follow-up, and one of the patients with presumably favorable outcome died a few weeks after the surgery. Still, the approach seemed effective in extracting brain signals from artifacted EEG data. Another interesting approach for SOZ localization is to first decompose the ictal EEG data to isolate seizure components, perform ESI on each component, and then integrate the ESI results.

Fig. 3. Source localization following independent component analysis (ICA) on gelastic seizure EEG in a patient with hypothalamic hamartoma. A: Representative EEG of a gelastic seizure (arrow: seizure onset). Note that by visual inspection, only diffuse rhythmical delta activity can be perceived, without localizing information. B: Time course of three rhythmical independent components identified via ICA from 3 merged gelastic seizures. C: Graphical representation of the spectral changes from −30 s to +30 s around seizure onset. An increase at 1-2 Hz is first seen in component 2. D: Best-fit dipole solutions for the independent components. Deep sources are obtained for components 1 and 2, and a superficial source for component 3. Modified from (Leal et al., 2006).

Using such a dynamic seizure imaging (DSI) technique on 76-electrode EEG, Yang et al. identified the SOZ in good correlation with the successfully resected zone or the SOZ as defined by IEEG/SPECT imaging methods in 17 seizures of 8 patients (Yang et al., 2011). The DSI method colocalized with the ground truth in 14 seizures and had a mean localization error of 10 mm in the remaining 3 seizures. For comparison, direct source imaging of the raw EEG data had a mean localization error of 28 mm. In a follow-up study, Lu et al. applied the same approach to 32-channel EEG (Lu et al., 2012a). In 7 out of 9 pediatric patients, the SOZ was estimated within the resected brain area, and in the remaining two patients, it was at least close. However, 4 patients had an unfavorable postsurgical outcome, so the RZ did not include the "true" EZ in these cases. Still, DSI corresponded well with the IEEG that was recorded in 7 patients, also for localizing multiple foci during later seizure propagation.
Despotovic et al. applied decomposition methods to seizure EEG of neonates with perinatal hypoxic brain lesions (Despotovic et al., 2012). They proposed a technique based on atlas-free segmentation and a brain extraction algorithm to construct neonatal head models including scalp, skull (with modelling of the fontanelle), CSF and brain. Forty-five focal seizures of 10 neonates were studied. In 9/10 patients, all seizures localized within 15 mm of a border of the MRI lesion, most of them even within 5 mm. Instead of decomposing EEG data in the sensor space, Pellegrino et al. applied decomposition analysis to spatiotemporal source maps (source space) obtained from simultaneous MEG/EEG recordings in a heterogeneous population including TLE and ETLE patients (Pellegrino et al., 2016). On a sub-lobar level, the most active source component was concordant in 9/14 seizures or 6/8 patients. The median distance of the source map maximum to the clinically defined SOZ based on IEEG was 11 mm. Ictal MSI was more accurate than ictal ESI, which may be partly due to the difference in spatial sampling, given that MSI used 275 MEG gradiometers while ESI used only 54 EEG electrodes. Usually, ictal MEG data are rare because of limited MEG recording times. In order to entirely avoid the averaging of ictal EEG epochs, which may lead to loss of information, Erem et al. proposed a dynamic ESI approach that explicitly models inter-epoch variation (Erem et al., 2017). Ictal EEG data are divided into series of consecutive short epochs, which do not need to contain rhythmic activity, and dynamic ESI is applied to the epoch collections. The feasibility of the approach was tested on simulated data as well as on real data of four patients. Two of these had a second epilepsy surgery to extend a first, unsuccessful anterior temporal resection, and dynamic ESI localized the SOZ in the area resected during the second surgery. However, given the defects of brain and skull caused by the first surgery, these patients may not have been ideal candidates for ESI. To reduce the impact of human judgement on ictal component selection, Habib et al. proposed a recursive ICA approach to eliminate unwanted components from ictal ESI (Habib et al., 2020). Noise was eliminated in each recursive cycle to obtain a single independent component containing the ictal EEG signal. In 24 seizures of 8 epilepsy patients (7 TLE) who subsequently underwent successful surgical resections, the approach was able to identify such a "best ictal component", and the source concorded with the resection in 20 of these. When applied to simulated datasets, the recursive ICA approach led to more accurate ESI results than ESI based on visual inspection of the EEG. The average time to compute the recursive ICA on a laptop computer was 13 minutes, which seems acceptable for clinical practice.

Additional connectivity analysis

All studies presented up to this point focused on the sources with maximum power as the estimate of ictal onset. That is, they did not take into account that epilepsy is a network disease and that connectivity between regions is likely to play an important role at seizure onset. In the following, we give an overview of ictal ESI studies with additional functional connectivity analyses. Functional connectivity describes the temporal correlation of neuronal activity in different brain regions. It allows studying the communication between brain regions during a seizure to see how the ictal neuronal activity propagates.
In epilepsy, Granger causality, a statistical concept of causality based on how well one signal can predict another, has frequently been used to localize the SOZ from ictal IEEG (van Mierlo et al., 2014). The preferred techniques to extract the ictal connectivity pattern are the Partial Directed Coherence (PDC) and the Directed Transfer Function (DTF). PDC models direct connections only, while DTF also considers indirect and cascade connections. Since indirect connections tell us much about where the information originally spread from, DTF is very suitable for localizing the SOZ (van Mierlo et al., 2018). Ding et al. were the first to combine ESI and subsequent connectivity analysis to estimate the SOZ, in two studies (Ding et al., 2007; Lu et al., 2012b). Ictal 3-second epochs at seizure onset were selected and reconstructed in the source space via ESI. Via DTF, connectivity between the sources' estimated time-series was assessed, and the sources with the most outgoing connections were considered to represent the SOZ. In the first study, 32-electrode EEG recordings of 20 seizures in 5 patients were analyzed. The estimated SOZs were within 15 mm of the presumed EZ, confirmed by MRI-visible lesions or SPECT images. However, intracranial recordings were not performed, and no details on surgery or outcome were given (Ding et al., 2007). In the follow-up study, the authors used the same method to compare 76-, 64-, 48-, 32-, and 21-electrode setups (Lu et al., 2012b). In 23 seizures of 10 patients with subsequent successful resections, higher numbers of electrodes led to better localizing results. In another study from the same group, Sohrabpour et al. used DSI (see the decomposition analyses above) in combination with connectivity analysis (DTF) to study a patient who had 3 seizures during 76-channel EEG recording (Sohrabpour et al., 2016). The source with the highest outgoing connections concorded well with the SOZ defined by IEEG recordings and with the RZ. Although confirming the usefulness of the approach, this was only a single-case study. Elshoff et al. studied ictal sources and networks in 11 patients who later underwent epilepsy surgery (Elshoff et al., 2013). In the 8 patients who were rendered seizure-free, unlike in the 3 patients with unfavorable surgical outcome, the first two sources identified by ictal ESI were concordant with the RZ on a sub-lobar level. When the authors applied connectivity analysis (renormalized PDC), they found that the network at the onset of the seizure had a star-like topography with the SOZ as the main hub, whereas in the middle of the seizure, it had a circular pattern without a central hub. This provided the first supporting evidence that an epileptic network can dynamically change over the course of a seizure. As a limitation of the study, only one seizure per patient was analyzed. In 2015, Klamer et al. were the first to use Dynamic Causal Modeling (DCM), a technique estimating directed connections based on a set of underlying neural mass models, using inference of hidden neuronal states from measurements of brain activity (Klamer et al., 2015). In a patient with musicogenic epilepsy, they localized the onset zone of seizures recorded with simultaneous hd-EEG/MEG. Functional MRI analysis revealed two competing regions of interest (ROI), one frontal and one right mesiotemporal, as the possible SOZ. Two models, each with one ROI acting as autonomous input over the other one, were compared to each other using a Bayesian framework. The model of the right mesiotemporal region driving the frontal ROI explained the ictal EEG better than the other one, and IEEG confirmed the SOZ in the right hippocampus. Although this study focused on two ROIs only, it proved that advanced connectivity modeling of hidden neuronal states is possible and can provide valuable information.
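For illustration, a bare-bones numpy implementation of the DTF idea on reconstructed source time-series might look as follows; the data are random stand-ins, the model order is arbitrary, and dedicated, validated toolboxes should be preferred in practice.

import numpy as np

X = np.random.randn(5, 3 * 256)          # stand-in: 5 source time-series, 3 s at 256 Hz
p = 5                                    # MVAR model order (lags), an assumption
n, T = X.shape

# Least-squares fit of the MVAR coefficients A_1..A_p
Y = X[:, p:]
Z = np.vstack([X[:, p - k:T - k] for k in range(1, p + 1)])      # stacked lagged data
A = Y @ Z.T @ np.linalg.pinv(Z @ Z.T)                            # shape (n, n*p)
A = A.reshape(n, p, n).transpose(1, 0, 2)                        # A[k] is the lag-(k+1) matrix

def dtf(freq, sfreq=256.0):
    # Transfer matrix H(f) of the fitted MVAR model, row-normalised as in the DTF
    Af = np.eye(n, dtype=complex)
    for k in range(p):
        Af -= A[k] * np.exp(-2j * np.pi * freq * (k + 1) / sfreq)
    H = np.linalg.inv(Af)
    return np.abs(H) / np.sqrt((np.abs(H) ** 2).sum(axis=1, keepdims=True))

D = np.mean([dtf(f) for f in np.arange(4, 13)], axis=0)          # average over 4-12 Hz
outflow = D.sum(axis=0) - np.diag(D)                             # summed outgoing connections
seizure_onset_source = int(outflow.argmax())                     # source driving the network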
Kouti et al. proposed a data-driven approach to define a variable number of ROIs for time-varying source connectivity analysis (Kouti et al., 2019). Spatially compact ROIs were identified based on the power of dipoles within a distributed inverse solution, followed by extraction of a single time course from each ROI and connectivity analysis via an adaptive directed transfer function. In 3 patients with pharmacoresistant focal epilepsy, 3-4 ROIs per patient were identified, of which at least one was inside the clinically determined seizure onset zone. Still, only 3 out of 23 available patient datasets were examined, and comparative results of other methods were not shown. Most recently, Lopes et al. used a total of 15 ROIs for lateralizing the SOZ based on low-density EEG data of 62 seizures in 15 patients (Lopes et al., 2020). Functional networks were computed using the phase locking value, and a model of ictogenicity was applied to predict the relative importance of each ROI to the network's ability to generate seizures. The model predicted the hemisphere of surgery correctly in 6 out of 10 patients with favorable surgical outcome but in only 1 out of 5 patients with unfavorable outcome. In 4 out of all 15 patients, predicted ROIs were found in both hemispheres, which was rated as inconclusive. Staljanssens et al. applied ESI and connectivity analysis (DTF) to the first 3 s of a seizure recorded with 256-channel EEG in 5 patients (Staljanssens et al., 2017a). Considering the source with the highest outgoing connections as the SOZ, this estimation was inside the resected area in 4/5 patients and within 10 mm in 5/5 patients. For comparison, selecting the source map maximum alone (ESI without connectivity) resulted in an estimation inside the RZ in only 1/5 patients and within 10 mm in 2/5. Fig. 4 illustrates the two approaches, ESI power alone vs. ESI followed by connectivity analysis. Reducing the number of electrodes used for the analysis led to decreased concordance with the resection. If the EEG was down-sampled to 32 electrodes, the combination of ESI and connectivity analysis was capable of estimating the SOZ within 10 mm of the RZ in only 1 patient, while ESI alone estimated the SOZ within 10 mm in 2/5 patients. Using a similar methodology in 6 seizures of 3 patients who were later rendered seizure-free by surgery, Martinez-Vargas et al. showed that ESI followed by connectivity analysis can localize the SOZ from clinical 27-channel EEG (Martinez-Vargas et al., 2017). The approach localized the RZ in 5/6 seizures, while in the remaining seizure, the localization was within 10 mm of the resection. In a follow-up study, Staljanssens et al. adapted the hd-EEG approach to clinical long-term EEG (Staljanssens et al., 2017b). In 111 seizures of 27 patients with favorable surgical outcome, an artifact-free EEG epoch was studied with connectivity analysis restricted to the seizure's frequency band. The SOZ was within 10 mm of the RZ's border in 94% of the seizures. Here again, ESI followed by connectivity analysis performed significantly better than the ESI source map maximum alone.
This study was the first to obtain high accuracy for SOZ localization using ESI and functional connectivity analysis in a larger patient cohort, based on widely available clinical recordings with standard electrode numbers. However, the study cohort included almost exclusively patients with TLE, and the method's specificity was not assessed. Most recently, the authors validated the same approach in 24 patients with various forms of ETLE (Vespa et al., 2020). Following visual identification of ictal EEG epochs, a spectrogram-based algorithm selected the optimal time window and frequency of interest for ESI and connectivity analysis. In a total of 94 seizures, additional functional connectivity analysis raised the specificity of ESI from 36% to 52% and the sensitivity from 62% to 84%. Connectivity analysis also helped to substantially increase the inter-rater agreement for the interpretation of the results. As a limitation, the method was not applicable to 10 patients who had no rhythmical EEG activity during their seizures.

Helpfulness in clinical decision-making

For clinical ESI, not only feasibility and accuracy are important, but also the yield of non-redundant information and the influence on the process of clinical decision-making. Already in 2002, Boon et al. prospectively assessed the added diagnostic value of ESI for clinical decision-making in presurgical epilepsy evaluation (Boon et al., 2002). In 14% of 100 patients, ictal ESI influenced the presurgical decision-making process. The consequence of ictal ESI was mainly to avoid IEEG because the candidates appeared unsuitable for resection, as ESI supported initial incongruences between MRI and conventional ictal EEG results. Notably, because ictal ESI was itself part of the decision process, its contribution to predicting surgical outcome could not be interpreted, since it remains unknown whether the respective patients truly could not have benefited from epilepsy surgery. More recently, Foged et al. compared the added diagnostic value of low-density ictal and interictal ESI (25 EEG electrodes) to high-density interictal ESI (256 electrodes) in 82 patients (Foged et al., 2020). Information obtained from low-density ESI (ictal and interictal) changed the presurgical management plan in 20% of the patients, while hd-ESI (interictal only) changed it in 28%. In 80% of cases, both ESI methods yielded congruent results. Congruence of ictal vs. interictal low-density ESI results was not detailed. Nevertheless, the results indicate that a relevant proportion of patients evaluated for epilepsy surgery can benefit from ictal ESI.

Fig. 4. Seizure onset zone localization approach used by (Staljanssens et al., 2017a, 2017b). EEG source imaging with and without subsequent functional connectivity analysis were compared. A segment at the beginning of the seizure was chosen and localized. Local maxima in the source map were selected, and their corresponding time-series were further processed with a power metric on the one hand and with connectivity analysis on the other hand.

Discussion and future perspectives

We reviewed the literature on ictal EEG source localization in detail. Overall, the 67 studies on a total of almost 1,000 patients with epilepsy show that ictal ESI is feasible and informative. Nevertheless, there are several limitations, which we discuss in the sections below. We also comment on further steps towards bringing ictal ESI into clinical practice.

Where are we now?
Thanks to research that started two and a half decades ago, current state-of-the-art ictal ESI can provide a more accurate and more objective interpretation than visual inspection of ictal EEG alone. In contrast to other second-line presurgical diagnostic methods such as MEG, fMRI and ictal SPECT, ictal EEG data are already available for almost every patient undergoing presurgical evaluation worldwide. Ictal ESI achieves similar sensitivities and specificities as ictal SPECT and MEG, but tracers or additional scanning facilities are not required, and it is highly cost-effective. Therefore, in theory, ictal ESI based on clinical video-EEG monitoring can easily be included in the non-invasive phase of the evaluation process. Its results can influence whether and which additional investigations are needed, depending on other clinical parameters. In certain situations, patients may benefit from fewer diagnostic procedures, financial costs can decrease, and patient throughput can be increased. Nevertheless, and despite many promising results, the method has not yet found its way into clinical practice, for several reasons. Since the first ictal ESI studies in the 1990s, a plethora of different methodological approaches has been proposed to localize the SOZ from ictal scalp EEG. As detailed above, these include various spherical or realistic head models with 3-7 compartments, different single-dipole or distributed inverse solutions, spectral analyses or decomposition methods for EEG data preprocessing, and subsequent functional connectivity analyses. Unfortunately, this wide spectrum of strategies hampers detailed comparisons. Additionally, the methodological complexity of ictal ESI currently renders it a tool for researchers and expert users only. Although ictal EEG data are obtained in every surgical epilepsy center, and both commercial and free ESI software packages are available, the knowledge of how to use them is narrowly distributed. Even though most recent methods are fairly objective, a minimum of human intervention is needed to start the analysis, e.g., epoch selection, ictal spike definition for averaging, or definition of the frequency band of interest. This increases the manpower needed and potentially decreases the reproducibility of results and the spread of the technique. By comparison, ESI of interictal epileptic discharges is rather straightforward, and best practice is better documented. IEDs are usually more frequent than seizures and less often corrupted by artifacts; they display relatively simple spatio-temporal dynamics and are easy to average. Nevertheless, presurgical decision-making aims at localizing the seizure onset and not necessarily the origin of interictal spikes. Therefore, theoretically, ESI to localize the SOZ should be more informative than ESI of the IZ. Several authors validated their ictal ESI results against interictal ESI (see Table 1). However, until now, there are no large studies showing a superiority of ictal over interictal ESI. In a recent meta-analysis of 6 studies on 159 patients validated by surgical outcome, ictal ESI had a sensitivity of 90%, a specificity of 47%, and an overall accuracy of 75%. The diagnostic odds ratio of favorable seizure outcome if the ESI-defined SOZ was resected was 7.9. For comparison, interictal ESI had a sensitivity of 80% and a diagnostic odds ratio of 4.0, while specificity and overall accuracy were the same as for ictal ESI (Sharma et al., 2019). One can conclude that there is at least a trend towards higher accuracy for ictal vs. interictal ESI.
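The accuracy figures quoted in this and the preceding sections derive from simple 2x2 contingency arithmetic, which is easy to make explicit; the counts below are hypothetical and do not correspond to any cited study.

def diagnostic_metrics(tp, fp, fn, tn):
    # Standard measures from a 2x2 table of test result vs. reference standard
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    dor = (tp * tn) / (fp * fn) if fp and fn else float("inf")   # diagnostic odds ratio
    return sensitivity, specificity, ppv, npv, dor

print(diagnostic_metrics(tp=18, fp=4, fn=2, tn=6))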
As described in the introduction of this article, the biggest need for additional diagnostic methods exists in cases with incongruent or insufficient information from the first-line diagnostic workup. Most often, this relates to patients with normal structural MRI ("MR-negative" or "non-lesional" cases) and/or ETLE. However, very few studies included more than a handful of ETLE cases (see Table 1), and only five of these focused on ETLE specifically (Despotovic et al., 2012; Elwan et al., 2013; Kovac et al., 2014; Pellegrino et al., 2016; Vespa et al., 2020). The most recent one was also the largest: Vespa et al. demonstrated the usefulness of low-density ESI and subsequent connectivity analysis in 24 patients with ETLE. Regarding MRI-negative epilepsy, only a single study focused on non-lesional epilepsy, and it included just 5 patients (Batista Garcia-Ramo et al., 2019). Hence, the clinical value of ictal ESI in these important conditions has not yet been assessed systematically enough.

What is needed to bring the technique into clinical practice?

Further advances in the various methodologies of ictal ESI can be expected, which will further increase its accuracy. For example, there are currently different techniques to deal with the low SNR of ictal EEG signals. While some studies found optimal localization results at the very beginning of the seizure, other studies suggest that data quality is more important. The best compromise may vary depending on the individual early ictal signal changes and on biological or technical artifacts, and a systematic approach remains to be found. In addition, it is likely that EEG data decomposition techniques and connectivity analyses will be refined and that best practice guidelines will evolve with future validation studies (He et al., 2019; Babiloni et al., 2020). During the evolution of EEG-based SOZ localization from a rather experimental technique to a widely established clinical tool, it is likely that a handful of specific approaches will emerge on which the community can agree as a standard. These need to provide a fair cost-benefit ratio in terms of accuracy, reproducibility and reliability of the results on the one hand, and the manpower and skills needed on the other hand. A possible solution that can help bring ictal ESI to clinical practice is automated analysis. For interictal ESI, there are already automated methods that detect interictal spikes and localize them with sensitivities of 79-88% (van Mierlo et al., 2017; Baroumand et al., 2018). Automated ictal ESI, as proposed by Koren et al. (2018), has the potential to be rapidly included in the presurgical evaluation. High spatial sampling via hd-EEG setups seems to increase the accuracy of ictal ESI and subsequent connectivity analyses substantially. Unfortunately, such setups are not available in all presurgical centers, and the cost of EEG amplifiers strongly depends on the number of channels. On the other hand, the most substantial increase in accuracy was shown for 64- to 128-channel EEG in comparison with 32 electrodes or fewer. Therefore, it may not be necessary to acquire full 256-channel EEG hardware. Finally, since ictal ESI is much more difficult to perform than interictal ESI, we need an answer to the question: in which patients does interictal ESI provide sufficiently accurate information, and who needs additional ictal ESI?
Given that interictal ESI already has a sensitivity of 70-80% to localize the EZ (Sharma et al., 2019), ictal ESI could confirm the results of interictal ESI and other localization techniques, or provide an additional localizing hypothesis in patients with discordant non-invasive results who are candidates for IEEG. Like Foged et al. (2020), future studies should target the added value of ictal ESI in comparison to interictal ESI, which is already taking an increasingly prominent role in the presurgical evaluation in epilepsy centers worldwide. Ideally, these studies should be prospective and multicenter-based, and they need to put an emphasis on ETLE and MRI-negative cases, since these are usually the most difficult cases to evaluate for epilepsy surgery, and ictal ESI could be particularly helpful here.

Conclusion

Ictal ESI is a promising neurophysiological tool to approach the epileptogenic zone in pharmacoresistant focal epilepsy, especially in ETLE and MR-negative cases. Connectivity analyses have been shown to add substantial information to pure ESI, while automated approaches promise to lower the required effort. Still, more research is needed to replicate, compare, and extend prior findings, with validation in large and heterogeneous patient groups. In this way, the algorithms' performance can be compared across patient characteristics (EEG ictal pattern, focus localization, MRI lesions, and surgical outcomes), and the added value of ictal ESI can be compared to interictal ESI and current state-of-the-art presurgical tools.

Declaration of Competing Interest

Pieter van Mierlo, Margitta Seeck
The Combination of Tissue-Engineered Blood Vessel Constructs and Parallel Flow Chamber Provides a Potential Alternative to In Vivo Drug Testing Models

Cardiovascular disease is a major cause of death globally. This has led to significant efforts to develop new anti-thrombotic therapies or re-purpose existing drugs to treat cardiovascular diseases. Because healthy human blood vessel tissues that recreate in vivo conditions are difficult to obtain, pre-clinical testing of these drugs currently requires significant use of animal experimentation; however, the successful translation of drugs from animal tests to use in humans is poor. Developing humanised drug test models that better replicate the human vasculature will help to develop anti-thrombotic therapies more rapidly. Tissue-engineered human blood vessel (TEBV) models were fabricated with biomimetic matrix and cellular components. The pro- and anti-aggregatory properties of both intact and FeCl3-injured TEBVs were assessed under physiological flow conditions using a modified parallel-plate flow chamber. These were perfused with fluorescently labelled human platelets and endothelial progenitor cells (EPCs), and their responses were monitored in real time using fluorescent imaging. An endothelium-free TEBV exhibited the capacity to trigger platelet activation and aggregation in a shear stress-dependent manner, similar to the responses observed in vivo. Ketamine is commonly used as an anaesthetic in current in vivo models, but this drug significantly inhibited platelet aggregation on the injured TEBV. Atorvastatin was also shown to enhance EPC attachment on the injured TEBV. The TEBV, when perfused with human blood or blood components under physiological conditions, provides a powerful alternative to current in vivo drug testing models to assess their effects on thrombus formation and EPC recruitment.

Introduction

Cardiovascular diseases are among the leading causes of mortality and morbidity worldwide. They are driven by aberrant platelet activation resulting from endothelial dysfunction and exposure of plasma to collagen- and tissue factor-rich atherosclerotic plaques. However, it is not possible to study this practically or ethically in human patients. Most studies assessing the effect of drugs on cardiovascular disease rely on animal models to predict and explain their effects in humans [1]. Different animal species have been used to evaluate certain features of cardiovascular disease, such as zebrafish, pigs, rabbits, and rodents. Mice have become the animal of choice for disease modelling given their genetic similarity to humans, their fast breeding rate, and well-established methods for creating genetic knock-outs [2,3]. Additionally, intravital microscopy allows the real-time examination of thrombus formation on artificial vessel injuries in response to ferric chloride (FeCl3) or laser injury [4]. These arterial thrombosis models have become popular for examining the molecular mechanisms underlying thrombus formation and how these can be impacted by drug treatments. The interpretation of the data obtained from murine thrombosis models is complicated by the use of anaesthetics. A survey of investigators performing intravital microscopy in murine thrombosis models found that ketamine, xylazine, and pentobarbital are the most commonly used anaesthetics [5]. However, previous studies have demonstrated that each of these anaesthetics can have an inhibitory effect on platelet function [5-7].
For instance, ketamine inhibited platelet aggregation through the suppression of IP 3 formation and also by inhibiting thromboxane synthase activity [7,8]. Additionally, ketamine can also interfere with endothelial nitric oxide production, as well as smooth muscle Ca 2+ signalling [9,10]. This suggests that the use of ketamine in intravital microscopy studies could create a baseline inhibition of platelet function as well as modulation of normal haemostatic properties of the vessel wall, which could overestimate the effect of genetic knockouts or drug treatments on normal haemostatic responses. These shortcomings provide an opportunity to create alternative thrombosis models by recreating normal haemostatic conditions by flowing human blood through human tissue-engineered arterial constructs. Tissue-engineered arteries were initially produced to use as alternatives to autologous vessels for vascular grafting. Vascular tissue engineering was pioneered in 1986 by Weinberg and Bell, who generated the first tissueengineered blood vessels (TEBVs) by culturing vascular cells on a collagen-based scaffold [11]. Nearly 40 years later, there are few TEBVs currently being used in clinical application. However, great progress has been made in improving the biomimicry of TEBVs. A number of previous studies have demonstrated that it is possible to generate tissue-engineered arteries through a variety of methods that can withstand normal arterial blood flow conditions whilst replicating the functional properties of the native arteries [12][13][14]. This has been achieved through using a variety of approaches including the use of a variety of scaffold material both synthetic (e.g., polyvinyl alcohol and gelatin [15]) and natural extracellular matrix molecules (collagen, elastin [16]). The properties of these scaffolds can be further modified to increase their mechanical strength through compression or chemical crosslinking or made more porous by freeze-drying [16]. Furthermore, the use of bioreactors has been influential in producing ideal culture conditions for vascular cells to ensure they assume the cellular phenotypes found in vivo [17]. These approaches have created TEBVs that possess a number of desirable properties such as the ability to support physiological spiral laminar flow [15], to mechanically withstand physiological arterial blood pressures [15,16], and to support the growth of a healthy endothelial cell lining [17]. This is consistent with the key parameters required for a substrate for clinical vascular grafting. These studies have commonly assessed the ability of the TEBVs to withstand activation of haemostatic and inflammatory responses of blood cells flowing through them. However, to utilise TEBVs as an animal-free alternative to current in vivo arterial thrombosis models requires a demonstration that they are capable of eliciting appropriate cellular reactions upon damage and that drug treatments are able to modulate that response. Previously, we have utilised tissue engineering approaches to create human arterial models that replicate the normal haemostatic properties of the intimal and medial lining of human arteries [18]. This includes the use of an electrospun polylactic acid (PLA) nanofiber scaffold to create an intimal layer construct that provides contact guidance to ensure that the endothelial cells can be aligned in the direction of flow, similar to the native artery. 
The medial layer construct is formed by human coronary artery smooth muscle cells cultured within a collagen hydrogel. Musa et al. (2016) showed that their tissue-engineered blood vessels are able to replicate the anti-and pro-aggregatory properties of native arteries when the intimal layer is intact and absent, respectively [18]. Our real-time spectrofluorimetry measurements of cytosolic Ca 2+ signalling provided a sensitive method to assess platelet activation upon exposure to the tissue-engineered constructs. These results clearly demonstrate that the intimal, medial, and full blood vessel constructs replicate the in vivo ability to modulate platelet function [18]. The thrombus formation upon the surface of the construct indicated by aggregated DiOC 6 -labelled platelets under a fluorescent microscope can be visualized in the consequent examination. However, these studies were performed under non-physiological mixing conditions. We have not previously examined the ability of the constructs to support thrombus formation under physiologically relevant shear stress from perfusion of platelets. The flexibility of a layer-by-layer fabrication approach in tissue engineering in conjunction with a perfusion device offers a great opportunity to study endothelial dysfunction and repair mechanisms. Endothelial function is an important and independent predictor for the severity of cardiovascular disease. An impaired endothelium is a key driver in the development of cardiovascular disease [19]. Circulating bone marrowderived endothelial progenitor cells (EPCs) have been found to correlate to endothelial function and to aid in neovascularisation and re-endothelialisation of injured vessels, maintaining vascular function and homoeostasis [19,20]. In models of myocardial infarctions and arterial injury, EPCs have been shown to localize preferentially to sites of vascular lesions, after which they divide, proliferate, and become incorporated into the endothelial layer of existing vessels, and promote the outgrowth of new vascular networks. These cells also have an effect on surrounding cells by producing angiogenic growth factors [21,22]. The most common drugs used to treat/prevent the development of cardiovascular disease are statins, with atorvastatin being the most well-known. This drug has been in use for decades, and its effects have been extensively studied. Some of these include reduction of the accumulation of esterified cholesterol into macrophages, increase of endothelial nitric oxide (NO) synthase, reduction of the inflammatory process, increased stability of the atherosclerotic plaques, and restoration of platelets activity and of the coagulation process [23,24]. Despite the known pleiotropic actions of atorvastatin, there is currently limited data on the impact this drug has on EPC ability to mediate endothelium repair. In this study, we aimed to examine whether our tissue-engineered (TE) human arterial models were able to mimic the pro-and anti-aggregatory properties of the damaged and intact artery under physiological flow conditions. We also aimed to examine whether our tissue-engineered arterial constructs could support EPC recruitment and whether this could be modulated by drug treatments. This was achieved by incorporating the constructs within a commercially available parallel-plate flow chamber and perfusing them with washed human platelet suspensions at arterial shear stresses. 
We examined whether this biomimetic test model system could be used as a potential alternative to in vivo drug testing models in thrombosis and EPC homing. This system was used to perfuse platelets and various cell populations over the TE constructs at physiologically relevant or pathological shear stress, allowing the real-time profiling of their interactions, and for the evaluation of changes in both the surface and structure of the blood vessel, as well as changes in the perfusate. The impact of ketamine on platelet activation and the effect of atorvastatin on EPC homing when EPC being exposed to TEBVs with a FeCl 3 -induced lesion were investigated. Through these studies, we demonstrated that human tissue-engineered arterial constructs, when perfused with human blood or freshly prepared washed human platelet suspension under physiological conditions, provide a human model system that can be used to study the effect of drugs without the potential confounding impact of species differences and use of anaesthetics. Fabrication of 3D Tissue-Engineered Blood Vessel Constructs Fabrication of 3D tissue-engineered intimal layers (TEILs), media layers (TEMLs), and the complete tissue-engineered blood vessel was achieved using human umbilical vein endothelial cells (HUVECs) and human cardiac artery smooth muscle cells (HCASMCs), both obtained from GIBCO, Life Technologies. Cells were cultured with medium 200 and 231, respectively, also obtained from GIBCO, Life Technologies, and used between passage numbers 2 and 5. The construction of the TEIL, TEML, and TEBV constructs was performed using our previously described methodology [18], as such these protocols are outlined briefly below. Electrospinning Aligned nanofibers were made by dissolving Poly-L,D-lactic acid (96% L/4% D, inherent viscosity of 5.21 dL/g, Purac BV, Gorinchem, the Netherlands) (PLA) in a 7:3 mixture of chloroform and dimethylformamide (DMF) (Sigma, Welwyn Garden City, UK) into 2% solution. The operational parameters of nanofiber fabrication followed the established protocol [25]. In brief, this 2% PLA solution was deposited onto detachable metal collectors, comprised of two partially insulated steel blades (30 cm × 10 cm), and connected to a permanent copper plate with a steel wire. The two steel blades had a gap of 5 cm between them where the fibers were deposited. Deposition of the fibers involved connecting the permanent plate to a negative electrode, and a syringe containing the solution was connected to a positive electrode. The PLA was extruded through an 18G needle and delivered at a rate of 0.025 mL/min. The electrodes were electrified with a power supply charged at ±6 kV (Spellman HV, Pulborough, UK). Nanofibers were collected and affixed onto acetate frames and were sterilized by UV irradiation thrice per side before use in culture. The nanofiber diameter was measured as~500 µm and the mat thickness~3 µm [25]. The porosity of the mat was smaller than 1 µm since no endothelial cells were observed to migrate through the nanofiber layer [18]. TEML Assembly To create TEML constructs, HCASMCs, at a density of 5 × 10 5 cells /mL, were mixed with neutralized 3 mg/mL type I collagen (Corning) solution. Two hundred microlitres of this solution was loaded onto 0.5 cm × 2.0 cm filter paper frames, which fit the dimensions of the parallel-plate flow chamber. The formed TEML constructs were used when the HCASMCs attained typical spindle-shaped morphology and reached confluence. 
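The seeding step above fixes the number of cells and the amount of collagen loaded into each medial-layer construct; the short sketch below simply makes that arithmetic explicit using the stated density and volume. It is a worked example, not additional data from the study.

```python
# Worked example (from the stated values): cells and collagen loaded per TEML construct.
seeding_density = 5e5        # HCASMCs per mL of collagen suspension
gel_volume_ml   = 0.2        # 200 microlitres loaded per 0.5 cm x 2.0 cm frame
collagen_mg_ml  = 3.0        # neutralised type I collagen concentration

cells_per_construct    = seeding_density * gel_volume_ml
collagen_per_construct = collagen_mg_ml * gel_volume_ml

print(f"{cells_per_construct:.0f} HCASMCs and {collagen_per_construct:.1f} mg collagen per construct")
# -> 100000 HCASMCs and 0.6 mg collagen per construct
```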
TEMLs were cultured in whole medium 231, with media changes every 2 days, for up to 10 days.

TEIL Assembly

To prepare TEIL constructs, a neutralized acellular collagen gel (3 mg/mL), with the same dimensions detailed above, was formed first. Once the gel had set, aligned PLA nanofibers [18], coated with 10 ng/mL fibronectin, were placed on the surface of the gel. HUVECs were then seeded onto the nanofibers at a density of 2 × 10⁵ cells/mL. The TEIL samples were cultured in whole medium 200, with media changes every 2 days, for 10 days to allow attainment of normal cell morphology and surface coverage.

TEBV Assembly

This model was a combination of the TEIL and TEML. The TEML was created first, and HUVECs were seeded, as described above, onto fibronectin-coated PLA nanofibers once the HCASMCs had attained the desired spindle-shaped morphology. The complete TEBV was returned to culture in HCASMC and HUVEC whole media mixed 7:3. The schematic for TEML, TEIL, and TEBV assembly is shown in Figure 1.

Perfusion Gaskets

To facilitate the perfusion of our 3D vascular models, a specialized gasket was manufactured from polydimethylsiloxane (PDMS). A circular ring with a diameter of 30 mm was cut first, and a 25 mm (length) × 5 mm (width) × 3 mm (depth) rectangular opening was then cut in the centre of the circle (Figure 2).

Parallel-Plate Flow Chamber and Shear Stress

The assembled flow chamber was used to generate laminar flow exerting physiological shear stress on the intimal surface of the TEBV. The dimensions of the gasket opening define the dimensions of the flow channel. The shear stress generated on the endothelial surface of the TE constructs was calculated from the parallel-plate relation τ = 6µQ/(b·h²), where µ is the viscosity of the perfused fluid (1.5 cP), Q is the flow rate (either 0.077 cm³/s or 0.007 cm³/s), b is the width of the gasket opening (5 mm), and h is the height between the upper surface of the construct and the top plate of the chamber (2.5 mm). The two flow rates provided shear stresses of 22.2 dyne/cm² and 2.2 dyne/cm², consistent with arterial and venous shear stresses, respectively [26,27].

Lesion Models

To mimic vascular injury, a FeCl3 lesion was created on the TEBV by dipping a 1 mm² square of filter paper in 10% FeCl3 and placing it onto the upper surface of the TE construct for 1 min. The TE constructs were then washed with PBS/HBS to remove excess FeCl3 and topped up with fresh media. The same lesioning method was applied to TE constructs used for EPC perfusion.

Platelet Preparation

This study was approved by the Keele University (UK) Research Ethics Committee (MH-200155, 1 May 2018). Blood was donated by healthy, drug-free volunteers who gave written informed consent. Blood was obtained by venepuncture and mixed with acid citrate dextrose (ACD; 85 mM sodium citrate, 78 mM citric acid, and 111 mM D-glucose) at a ratio of 5:1. Platelet-rich plasma (PRP) was obtained by a soft centrifugation at 725 g for 8 min. After centrifugation, the PRP was collected and treated with aspirin (50 mM) and apyrase (0.1 U/mL). The PRP was then centrifuged at 450 g for 20 min, and the platelet pellet was resuspended in supplemented HEPES-buffered saline (HBS; pH 7.4, 145 mM NaCl, 10 mM HEPES, 10 mM D-glucose, 5 mM KCl, 1 mM MgSO4) to reach a platelet density of 2 × 10⁸ cells/mL. The HBS was supplemented with 1 mg/mL bovine serum albumin (BSA), 1.8 mg/mL glucose, 0.1 U/mL apyrase, and 200 µM CaCl2.
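To make the chamber calibration reproducible, the sketch below implements the parallel-plate relation given above. It is a minimal illustration, not code from the study; the effective gap height is treated as an assumption, since a gap of roughly 0.25 mm (rather than the 2.5 mm quoted for the chamber depth) is what reproduces the 22.2 and 2.2 dyn/cm² operating points with the stated viscosity and flow rates.

```python
# Sketch (not from the study): wall shear stress in a parallel-plate flow chamber.
def wall_shear_stress(mu_poise, q_cm3_per_s, b_cm, h_cm):
    """tau = 6*mu*Q / (b*h^2), in dyn/cm^2 when mu is in poise,
    Q in cm^3/s and b, h in cm."""
    return 6.0 * mu_poise * q_cm3_per_s / (b_cm * h_cm ** 2)

MU = 0.015   # 1.5 cP expressed in poise
B = 0.5      # gasket opening width, cm (5 mm)
H = 0.025    # assumed effective gap over the construct, cm (0.25 mm)

for q, label in [(0.077, "arterial"), (0.007, "venous")]:
    tau = wall_shear_stress(MU, q, B, H)
    print(f"{label}: Q = {q} cm^3/s -> tau = {tau:.1f} dyn/cm^2")
# Prints ~22.2 and ~2.0 dyn/cm^2, close to the operating points quoted above.
```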
Prior to perfusion, the assembled perfusion system without the TE construct was perfused with 1% BSA solution for 1 h to prevent unwanted platelet adhesion to the components of the perfusion system. The gasket and the spacers were incubated overnight with 1% BSA solution at room temperature.

EPC Isolation and Culture

EPCs were isolated from 60 mL of whole blood collected from healthy volunteers. To prevent coagulation, the blood was split into two falcon tubes containing 5 mL of ACD each. The blood-anticoagulant mix was then split into 15 mL falcon tubes and centrifuged at 700 g for 8 min. After centrifugation, the layers occurred as follows (from top to bottom): plasma, the enriched cell fraction (an interphase consisting of lymphocytes/peripheral blood mononuclear cells (PBMCs)), erythrocytes, and granulocytes. The plasma fraction was carefully discarded, leaving approximately 0.5-1 mL above the interface. The enriched cell fraction was pooled into one tube and diluted 1:1 with PBS. To further purify the cells, i.e., to eliminate residual plasma and red blood cells (RBCs), the pooled fraction was carefully layered over 8 mL of Ficoll-Paque, ensuring no mixing of the layers, and centrifuged again for 20 min at 400 g. After this centrifugation, any residual red blood cells sit below the separation medium, the enriched cell fraction lies immediately above it, and diluted plasma and platelets lie above this. After isolation, the cell-rich fraction was diluted with PBS, then centrifuged again at 400 g for 10 min. The supernatant was discarded, and the resultant pellet was resuspended in 2 mL of complete EPC media. The cell suspension was then split into two wells of a 12-well plate that had been coated with 2.5 µg/cm² fibronectin. On day 1, the contents of the wells were agitated and transferred to new wells; this was repeated for the next 3 days. Media was changed daily for the first 7 days and then every 2-3 days for up to 20 days. Carboxyfluorescein succinimidyl ester (CFSE) dye was used to label EPCs at a concentration of 2 µL/mL of cell suspension. Cells were incubated for 15 min at 37 °C, then centrifuged for 3 min at 300 g. The pellet was re-suspended in 5 mL of fresh supplemented media, and the cell suspension was allowed to rest for 30 min at 37 °C before loading into the perfusion system.

DiOC6

To facilitate visualization of platelet adhesion and aggregation on the TE constructs under flow conditions, platelets were labelled with DiOC6, a fluorescent membrane dye. Blood was mixed 5:1 with ACD (anticoagulant). The anticoagulant was mixed with the membrane dye to a final concentration of 1 µM prior to the addition of whole blood. This mixture was incubated for 10 min at room temperature and then centrifuged at 1500 g for 8 min to obtain PRP. The resultant PRP was treated with 100 µM aspirin and 0.1 U/mL apyrase, followed by a centrifugation wash at 350 g for 20 min. The platelet pellet was then re-suspended in supplemented HBS to a final cell density of 2 × 10⁸ cells/mL.

Ketamine Treatment

To assess whether the platelet responses might be affected by ketamine treatment, experiments were performed in which both the platelets and the TE constructs were pre-treated with ketamine before the perfusion of platelets over the TE constructs.
TEMLs were incubated with 1 mM ketamine (Narketan) or HBS for 1 h at 37 • C, after which they were perfused with washed DiOC 6 -labelled platelets incubated with either 300 µM ketamine or an equivalent volume of the vehicle, or HBS, under the same conditions stated above. Platelet aggregation on the perfused TE constructs was evaluated by fluorescence microscopy (Leica MSV269) under excitation wavelength of 485 nm and emission of 501 nm. As a static comparative, TE constructs were treated with 1 mM ketamine and placed in a cuvette with DiOC 6 -labelled platelets treated with ketamine or HBS for 15 min at 37 • C. Whilst untreated TEML constructs were placed atop untreated platelet samples as the control. Platelet Aggregometry Platelet aggregometry was performed using a modification of the previously published technique [28]. Following platelet incubation with TE constructs, 200 µL of the platelet suspension was transferred into a 96-well plate and then placed into a plate reader prewarmed to 37 • C (BioTek Synergy 2 microplate, Winooski, VT, USA). Baseline absorbance readings were taken once at a wavelength of 600 nm, obtaining an absolute absorbance reading post TE construct incubation. In the present assay, the plate reader was set up to use a fast-shaking mode between absorbance readings to aid in sample mixing. Atorvastatin Treatment Following FeCl 3 lesioning, the TE constructs were incubated with 60 µg/mL atorvastatin calcium trihydrate (Sanofi) for 3 and 5 h at 37 • C. This was followed by the perfusion of CFSE-labelled EPCs (without atorvastatin in EPC perfusate as control) for 45 min in the parallel-plate chamber. Images were taken using a Leica inverted microscope (Leica MSV269) under an excitation wavelength of 485 nm and emission of 501 nm. Cell attachment was quantified with ImageJ. Statistics and Data Analysis Values stated are mean ± SEM of the number of observations (n) indicated. Analysis of statistical significance was performed using a two-tailed Student's t-test as well as twoway analysis of variance (ANOVA), confirmed using the Brown-Forsythe test. p < 0.05 was considered statistically significant. Results The adhesion and aggregation of platelets by exposure to the TE constructs under dynamic flow conditions were assessed by monitoring fluorescence from DiOC 6 -labelled human platelets upon the surface of the TE constructs (solid phase activation), as well as monitoring activation of platelets remaining within the solution by platelet aggregometry (liquid phase activation). Three types of TE constructs were evaluated, TEIL, TEML, and TEBV, with the acellular collagen hydrogel as a negative control as we have previ-ously demonstrated this to not elicit platelet activation [18]. The homing effect of drug atorvastatin on EPC attachment on these TE constructs under dynamic flow conditions was assessed. Cells in TE Constructs Attained Typical Morphology and Organisation Consistent with our previous findings, it was possible to generate TEIL, TEML, and TEBV constructs with HUVECs and HCASMCs showing typical normal cellular morphology when grown atop (HUVECs) or within (HCASMCs) the collagen hydrogel scaffold using our previously published layer-by-layer fabrication technique [18]. Figure 3 demonstrates the effect of aligned nanofibers on HUVEC alignment, with cells showing more organised growth/orientation compared to culture flasks. Meanwhile, smooth muscle cells maintain spindle-shaped morphology while embedded in the collagen gel (data not shown [18]). 
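Platelet aggregate coverage (DiOC6) and EPC attachment (CFSE) were quantified from fluorescence images, with ImageJ used for cell counting as described above. As an illustration of an equivalent, scriptable workflow, the sketch below thresholds a fluorescence image and reports labelled-cell coverage and object count using scikit-image; the filename and the choice of Otsu thresholding are assumptions rather than details taken from the study.

```python
# Sketch (assumed workflow, ImageJ-equivalent): quantify fluorescent coverage and object count.
import numpy as np
from skimage import io, filters, measure, morphology

img = io.imread("construct_field.tif", as_gray=True)      # hypothetical filename

# Segment fluorescent (DiOC6/CFSE-positive) pixels with an automatic Otsu threshold.
mask = img > filters.threshold_otsu(img)
mask = morphology.remove_small_objects(mask, min_size=20)  # drop speckle noise

coverage_pct = 100.0 * mask.sum() / mask.size              # % of field covered
n_objects = measure.label(mask).max()                      # aggregates / attached cells

print(f"coverage: {coverage_pct:.1f}%  objects: {n_objects}")
```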
TEML Supports Shear-Dependent Platelet Aggregation under Physiological Flow Conditions We first assessed whether the TEML is able to support platelet aggregation under two different shear stresses indicative of those found in the arterial (22.2 dyne/cm 2 ) and in the venous circulation (2.2 dynes/cm 2 ; [29]). The TEML construct is a model for an endothelium-denuded blood vessel and therefore can be used to assess pro-aggregatory properties of the medial layer. Figure 4A,B show that acellular collagen gels did not trigger platelets' activation under both low and high shear stress rate, consistent with our previous findings under static conditions [18]. When the TEML constructs were exposed to platelets perfused at venous shear stresses, sporadic platelet adhesion was observed ( Figure 4C). The strong platelet aggregation observed under arterial shear stresses demonstrates that shear stress and fluid flow influence platelet adhesion ( Figure 4D). These studies are consistent with a number of previous studies that have demonstrated a role for sheardependent platelet aggregation in driving thrombus formation [30,31]. The platelets adhered and significantly aggregated when exposing to the media layer of the constructs ( Figure 4D). Since no aggregation was observed on the acellular hydrogels ( Figure 4A,B), platelet aggregation should be triggered by neo-collagen produced by the embedded HCASMCs. This corresponded well with previous studies that demonstrated that the pro-aggregatory properties of the TEML were mainly attributed to the native collagen secretion of the SMCs [32] and is consistent with our previous work under stirred conditions [18]. Additional synthesis and secretion of thrombogenic molecules may also contribute to platelet activation and aggregation. TEBVs Prevent Platelet Aggregation under Physiological Flow Conditions The endothelial lining of the native human artery produces platelet inhibitors such as nitric oxide (NO) and prostaglandins to prevent thrombosis. Upon vascular damage, the loss of the antithrombotic endothelial lining, and the exposure of the prothrombotic properties of the medial layer, triggers thrombus formation. To test whether the TEBV is able to replicate this endothelium-dependent modulation of the haemostatic properties of the construct under physiological flow conditions, TEBVs with differences in the integrity of the endothelial cell layer were exposed to human platelet suspension under arterial shear stresses. In these experiments, we used (i) a TEBV with a fully confluent endothelial layer, (ii) a TEBV with a partially confluent endothelial layer, and (iii) a TEBV in which an intimal injury was triggered with FeCl 3 , a common injury model used in murine thrombosis models [33]. Real-time fluorescence imaging revealed that TEBVs with an intact, confluent endothelial layer did not show platelet aggregation upon their surfaces over a 15 min period of perfusion ( Figure 5I (A)). In contrast, the TEBV with a partially confluent endothelial layer exhibited limited platelet aggregation. Platelets adhered only in areas that were not covered with endothelial cells, allowing their direct contact with the media layer. However, not all the areas lacking endothelial coverage showed platelet aggregation, suggesting that the endothelial cells on the TEIL may be capable of secreting sufficient anti-thrombotic molecules to prevent platelet aggregation ( Figure 5I (B,C)). 
In dramatic contrast, FeCl3-lesioned TEBV samples supported extensive platelet aggregation, as shown in Figure 5I(D), in which massive platelet aggregates are attached to the constructs. The inset image shows the dense morphology of the aggregated platelets, and the low-magnification image shows the overall localisation of the aggregates. The strip-like region of aggregation was larger than the lesion area (1 mm²), implying that cytokines produced by the lesioned endothelial layer and exposed subendothelial proteins (collagens in our model) could be carried downstream, triggering aggregation away from the lesion site itself. FeCl3-mediated endothelial cell injury thus exposed platelets to the pro-aggregatory medial layer of the construct, providing a reliable FeCl3-triggered arterial injury model. These qualitative results correspond well with the comparison of the quantitative aggregation state of the platelets before and after perfusion, in which FeCl3 injury could be seen to trigger aggregation of platelets within the platelet suspension (Figure 5II). With regard to the integrity of the endothelial layer, absorbance measurements taken before and after perfusion did not differ significantly for the TEBV with a full endothelial layer, indicating that platelet aggregation is prevented by the presence of an intact endothelium. In contrast, significant differences in platelet aggregation were observed in FeCl3-treated TEBVs. This is consistent with platelets producing and releasing autocrine activators that recruit further platelets to the growing thrombi, thus triggering platelet aggregation within the platelet suspension.

Ketamine Inhibited Platelet Aggregation at Arterial Shear Stress

The data above demonstrate that the TEML, and the FeCl3-treated TEBV, trigger platelet aggregation through exposure of the pro-aggregatory medial layer in the endothelium-denuded regions of these constructs. Thus, our TEBV perfusion system can be used in conjunction with the FeCl3 injury model, which is commonly used in murine thrombosis models [33]. As our human TEBV model does not require the addition of anaesthetics to blood samples, experiments were performed to assess whether the addition of ketamine, the most commonly used anaesthetic in murine thrombosis models, could artificially alter the platelet aggregatory responses seen. As the TEML provides a more pronounced aggregatory response due to the absence of an endothelial lining, we used this system to investigate whether ketamine could affect thrombus formation by human platelets under physiological flow conditions. DiOC6-labelled platelets were treated with 300 µM ketamine prior to perfusion over the TEML surface under arterial shear stress (22.2 dynes/cm²). Significant platelet aggregation was found on the TEML perfused at high shear stress without ketamine treatment (Figure 6). Platelets treated with ketamine lost their ability to adhere to the TEML surface. These results indicate that ketamine-treated platelets (Figure 6I(B)) were less reactive than untreated platelets (Figure 6I(A)), which showed significant aggregation on the surface. Ketamine not only significantly inhibited platelet aggregation on the construct surface but also inhibited platelet activation within the surrounding suspension, as demonstrated by the aggregation state of platelets before and after exposure to the TEML (Figure 6II).
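Because the plate-based aggregometry read-out compares absorbance at 600 nm before and after perfusion, the per-donor change can be summarised as a simple percentage and tested for significance. The snippet below is a hedged sketch of that comparison using a paired t-test; the numeric values are placeholders, not data from the study.

```python
# Sketch with placeholder values: paired comparison of 600 nm absorbance before/after perfusion.
import numpy as np
from scipy import stats

pre  = np.array([0.52, 0.49, 0.55, 0.50])   # hypothetical baseline absorbance per donor
post = np.array([0.61, 0.58, 0.66, 0.57])   # hypothetical post-perfusion absorbance

# Whether aggregation raises or lowers the 600 nm reading depends on the assay set-up;
# only the paired change and its significance are illustrated here.
pct_change = 100.0 * (post - pre) / pre
t, p = stats.ttest_rel(post, pre)
print(f"mean change: {pct_change.mean():.1f}% +/- {pct_change.std(ddof=1):.1f}%, paired t-test p = {p:.4f}")
```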
This is likely due to the known effect of ketamine inhibiting platelet Ca 2+ signalling [34,35]. This would prevent dense granule secretion, which would in turn prevent the activation of these cells in the perfused platelet suspension. To further confirm the inhibitory effect of ketamine on platelets, the platelets treated or untreated with ketamine were exposed to corresponding TEMLs in cuvette holders under continuous magnetic stirring. Representative images are shown in Figure 7. Fluorescent imaging of the TEML surface exposed to DiOC 6 -labelled platelets showed that samples treated with ketamine had fewer platelet aggregates appearing on the TEML surface ( Figure 7D-F). The untreated samples ( Figure 7A-C) displayed greater adhesion, as well as formation of multiple platelet aggregates. Atorvastatin Increases EPC Attachment Atorvastatin has been reported to increase circulating numbers of EPCs [23]. To evaluate whether atorvastatin also has an effect on the recruitment and attachment of perfused EPCs, our three TE construct variants were lesioned with FeCl 3 , then incubated with 60 µg/mL atorvastatin for 5 h. Constructs were then perfused with EPCs for 45 min at 22.2 dynes/cm 2 . EPCs were perfused at a density of 1 × 10 4 cells/mL. After perfusion, constructs were imaged and attached cells quantified (Figure 8). Across all the models shown here, it was evident that atorvastatin incubation increased the number of cells that attached to the lesioned surfaces of the constructs ( Figure 8II). We have previously demonstrated the pro aggregatory properties of the TEML, and Figure 8 suggests that atorvastatin increases these pro-aggregatory properties. The data presented here suggest that without either the endothelial (TEML) or medial (TEIL) layers, the response is stronger than when both layers are present, suggesting a synergistic effect between the medial and intimal layers in terms of modulating cell recruitment. The almost steady state of attachment of EPCs on the TEML suggests that the FeCl 3 lesion is not the driving factor for cell attachment but rather time-dependent signalling/cytokine production by the representative cells and the presence of atorvastatin. Discussion Being able to rapidly and effectively screen novel and pre-existing drug therapies for the treatment of cardiovascular disease in a human model system would provide a significant advance in our abilities to treat patients at risk of a thrombotic event. In this paper, we demonstrate that our human TEBVs are able to effectively trigger both the activation of thrombus formation and EPC recruitment to vascular injury under physiological flow conditions. Through the use of this humanised experimental system, we should be able to better determine effective drug concentrations and combinations prior to clinical trials. Additionally, it would reduce our need to perform costly, ethically challenging preclinical trials on animals, which require the use of anaesthetics that may significantly impact the results of these trials. Thus, tissue engineered human blood vessel models show promise in improving the translational potential of preclinical studies of drug delivery, drug action, and drug discovery in pharmaceutical research. Through developing PDMS sample holders and perfusion gaskets, we were able to successfully modify a commercially available parallel flow chamber to incorporate our previously described TEBV. 
This allowed us to examine whether these constructs are able to support thrombus activation and EPC recruitment under physiological flow conditions. In this study, we confirmed that, similar to static conditions, our acellular type I rat collagen hydrogel is unable to support significant human platelet activation under physiological flow conditions. However, our TEML constructs have been shown to produce type I and III neo-collagen that is able to trigger significant platelet activation [18]. Here we demonstrate that these are able to support significant platelet activation under arterial shear stresses but not under those more typically found in large veins. This is consistent with previous work [36,37] demonstrating that thrombus formation is regulated by shear-dependent platelet aggregation, thus confirming that our TEBV can replicate the normal haemostatic properties of native blood vessels. In contrast, the presence of an intact intimal layer completely blocks the activation of human platelets perfused over the surface of the TEBV. However, when the endothelial layer was impaired, either by incomplete coverage of endothelial cells (via a shortened culture period) or via FeCl 3 -induced injury, the TEBV constructs were able to trigger an effective haemostatic reaction consistent with that seen in the native artery ( Figure 5). FeCl 3 -injured TEML also displayed platelet aggregation under flow, an observation that was absent on uninjured constructs. Location of FeCl 3 application had no impact on this observation, demonstrating that the observed effect is due to endothelial damage caused by FeCl 3 and not via the presence of FeCl 3 itself [33,38]. This can be concluded as no enhancement of platelet activation and aggregation observed upon FeCl 3 treatment of the endothelial-free TEML constructs (data not shown). The data presented here support the traditional explanation that FeCl 3 -induced injury results in thrombus formation in murine arteries through endothelial denudation, facilitating platelet activation upon the exposure of sub-endothelial collagen. These findings demonstrate that this test model can be used to study platelet inhibition and activation in a convenient, operator-friendly, and dynamic manner. The image analysis of the adhered platelets on the TE constructs, and assessment of platelet aggregation in liquid phase using aggregometry, allows the extraction of both qualitative and quantitative datasets to assess ex vivo human thrombus formation. This study demonstrated that ketamine exerted a strong negative effect on platelet aggregation and activation. This corresponds well with previous in vitro and in vivo studies that have investigated the underlying mechanisms of ketamine's inhibition of platelet function [6]. In our experiments, we observed that ketamine almost completely inhibits thrombus formation upon the TEML construct. It is possible that this effect is caused by the inhibition of dense granule secretion by ketamine, as autocrine signalling molecules released from here are known to be crucial to the recruitment of circulating platelets onto the surface of forming thrombi [7,39]. This inhibitory effect could artefactually alter the size and structure of thrombi seen in current animal thrombosis models, potentially leading to an overestimation of the effectiveness of putative anti-platelet therapies. 
This is consistent with previous findings that show that the use of different anaesthetics differentially impacts the efficacy of integrin αIIbβ3 blockers in murine thrombosis models [39], thus providing initial evidence that our model system is a potential alternative to current in vivo studies. A more detailed side-by-side comparison examining the impact of different anaesthetics on the thrombotic response in current in vivo models, and in the TEBV model presented here, will be required to fully validate the model system. These findings highlight the value of using tissue-engineered human blood vessels for drug testing. By using human cells and eliminating the need for anaesthesia, we should be able to accurately model the processes of haemostasis and vascular repair to improve the translational potential of any findings. Additional advantages of our model include a reduction in the use of animal thrombosis models, elimination of the need for costly intravital microscopy equipment, and a lower cost of TE construct production compared to housing mouse colonies. We also used the TEBV to demonstrate that atorvastatin enhances recruitment and attachment of perfused EPCs. Although previous studies have demonstrated that atorvastatin increases circulating numbers of EPCs [19], there has been limited study on its ability to modulate EPC recruitment to the damaged vascular wall, which may partly underlie the beneficial effects of this drug in preventing acute cardiovascular events. Possible mechanisms by which atorvastatin might enhance EPC recruitment to the damaged TEBV is via increased bioavailability of NO [40] or activation of matrix metalloproteinase-2 and 9 (MMP2 and MMP9) [41]. It is also possible that the SDF-1 CXCR4 axis is also involved in recruiting EPCs to sites of vascular injury. This theory is supported by the findings by Luo et al., 2018 [42]. They found that NO production was induced by SDF-1, which triggers multiple signalling pathways, resulting in chemokine-induced changes of the EPC cytoskeleton leading to enhanced cell migration [42]. The value of our models is that various doses/concentrations of different drugs, as well as combinations of drugs, can be tested faster than conventional models and also allows for real-time monitoring without measures such as anaesthesia being needed. The ease of assembly also makes it possible to combine multiple cell types and can be adapted for any species. In comparison to other in vitro models [43,44], our models also allow for real-time visualisation of cell attachment due to the nature of the perfusion chamber used, containing both a top and bottom window, as well as the open-faced nature of the models themselves. Conclusions The layer-by-layer assembly of human blood vessel models provides a convenient and reliable research tool to investigate the interaction of blood components, such as platelets and circulated progenitor cells, with a blood vessel. This provides a simple method to assess the impact of a variety of drug interactions on haemostasis and vascular repair. The parallel-plate flow chamber, plus the adjustable dimensions of the PDMS gasket, enabled incorporation of 3D tissues into the chamber, permitting their exposure to perfusion with blood components at different physiological shear stresses. The labelled cells allow the monitoring of cellular activation and adhesion in real-time. The intact, confluent intimal layer can inhibit platelet activation, whilst the partially formed intima did not. 
The medial layer with newly formed collagen triggered platelet aggregation, whilst collagen gel made from rat skin collagen type I did not. The presence of a similar concentration of ketamine used in current in vivo thrombosis models inhibited the ability of human platelets to adhere to the TEML surface. The tissue-engineered vessels could also be used to demonstrate that atorvastatin is able to enhance the homing capabilities of EPCs by improving their ability to adhere to the damaged tissue-engineered blood vessel under flow conditions. Here we show that the combination of tissue-engineered arterial constructs and a parallel-plate flow chamber can be used to effectively simulate the haemostatic and vascular repair processes ex vivo. Therefore, these results indicate that this model system is able to provide a potential alternative to in vivo testing models. This will also permit us to test the predicted mechanisms of action of a selected anti-thrombotic drug without the need for animal models and may be modified further to simulate other clinical conditions.
Association of Ligamentum Flavum Hypertrophy with Adolescent Idiopathic Scoliosis Progression—Comparative Microarray Gene Expression Analysis The role of the ligamentum flavum (LF) in the pathogenesis of adolescent idiopathic scoliosis (AIS) is not well understood. Using magnetic resonance imaging (MRI), we investigated the degrees of LF hypertrophy in 18 patients without scoliosis and on the convex and concave sides of the apex of the curvature in 22 patients with AIS. Next, gene expression was compared among neutral vertebral LF and LF on the convex and concave sides of the apex of the curvature in patients with AIS. Histological and microarray analyses of the LF were compared among neutral vertebrae (control) and the LF on the apex of the curvatures. The mean area of LF in the without scoliosis, apical concave, and convex with scoliosis groups was 10.5, 13.5, and 20.3 mm2, respectively. There were significant differences among the three groups (p < 0.05). Histological analysis showed that the ratio of fibers (Collagen/Elastic) was significantly increased on the convex side compared to the concave side (p < 0.05). Microarray analysis showed that ERC2 and MAFB showed significantly increased gene expression on the convex side compared with those of the concave side and the neutral vertebral LF cells. These genes were significantly associated with increased expression of collagen by LF cells (p < 0.05). LF hypertrophy was identified in scoliosis patients, and the convex side was significantly more hypertrophic than that of the concave side. ERC2 and MAFB genes were associated with LF hypertrophy in patients with AIS. These phenomena are likely to be associated with the progression of scoliosis. Introduction Adolescent idiopathic scoliosis (AIS) is a three-dimensional spine curvature that progresses during the pubertal growth spurt [1]. The prevalence of AIS is approximately 0.5-4% of the population, and it affects girls more than boys [1]. Progression of the curvature and the associated deformity of the spine may lead to respiratory dysfunction [2] and back pain [3]. If the scoliotic curvature progresses > 25 • in skeletally immature patients, treatment is warranted, starting with bracing. Surgical treatment is usually required at ≥40-50 • scoliotic curvature to halt the progression of scoliosis with its associated problems of deterioration of self-image, deformation of the rib cage, and back pain [4]. Both treatments can be problematic; bracing can cause skin irritation, a temporary decrease in vital capacity, mild pain in the chest wall, and inferior rib deformation [5]. Although surgical treatment generally has an acceptable complication rate, scarring, neurological deterioration, major vascular injury, and deep wound infections can occur [6]. There are no prophylactic measures available for AIS. The establishment of a treatment for scoliosis based on its pathogenesis is expected to have enormous benefits for many patients in the future. The pathogenesis of AIS is still not understood, although recent studies have reported the involved genes genome-wide association analysis in patients with AIS [7,8]. However, the understanding of the genetic variants related to the progression of AIS has not led to clinical relevance [9]. ScoliScore is a prognostic genetic test designed to evaluate the risk of curve progression in skeletally immature patients with AIS with Cobb angles of 10 • to 25 • [10]. 
There was not any association between the single nucleotide polymorphisms (SNP) used in ScoliScore and curve progression or curve occurrence in the French-Canadian population, although there are differences in race [11]. Clinical applications do not always arise from genetic associations. Other factors which may be related to the pathogenesis of AIS, such as leptin levels [12], spinal cord tethering [13], abnormal somatosensory evoked potentials [14], asymmetrical loading of the posterior parts of the vertebrae [15], osteopenia [16], melatonin-signal dysfunction [17], and insufficient serum vitamin D levels [18] have been reported. Considering the above, the pathogenesis of AIS has been considered to be multifactorial [19]. The ligamentum flavum (LF) is anatomically positioned in the posterior column of the spine, which covers the posterior aspect of the dura mater. LF hypertrophy is observed in aging [20], obesity [21], and mechanical stress [22] and can result in lumbar spinal stenosis [23]. Although dorsal shear force of the spine is one of the suspected causes of AIS [15], its mechanism is not clear. Shortening of the spinal canal is observed in scoliotic spines [24], leading to the hypothesis that posterior spinal elements tether and cause lordosis and rotation of the spine because the anterior parts of the vertebral bodies continue to grow [24]. Biomechanical analysis suggested that the synergistic effects of, not only the compressive force of the expanding anterior vertebral body resulting in a force driving the apical vertebral body out of midline, but also the tension forces on the posterior column resulting in a force keeping the spinal posterior elements in the normal position, play important roles in the progression of scoliosis [25]. There are few reports of the role played by the LF in patients with AIS. The purpose of this study was to identify the pathogenesis of AIS progression, focusing on the LF using microarray and gene profiling with the goal of elucidating the mechanism of scoliosis progression, which may point to new prognostic and treatment methods. Subjects A total of 22 patients with AIS were enrolled in this study. These 22 patients with AIS (20 females, 2 males; mean patient age 14.2 years) underwent posterior spinal fusion and correction at our institution from 2017 to 2019 (Table 1). According to the classification of Lenke et al. [26], Lenke Type 1 and 2 patients numbered 16 and 6, respectively. The inclusion criteria were patients who had undergone AIS surgery and follow-up at our hospital. All patients were given a thoracic MRI for back pain or for checking neurological abnormality. Additionally, 18 adolescent patients who had no scoliosis and had been given a thoracic spine MRI for another reason were selected as controls (12 females, 6 males; mean patient age 14.8). The reasons for thoracic MRI were: 6 chronic back pain, 2 acute back pain, 3 scapular pain, 3 leg numbness, 1 leg motor weakness, 1 atrophy of leg muscle, 1 chronic hypochondrium pain, and 1 steppage gait. Prior approval for the study was obtained from the Ethical Review Board of the University of Toyama (consent no. I2013004 ). Informed consent to participate in this study and consent to instrumentation were obtained before surgery. MRI Imaging LF thickness of scoliosis was assessed at the facet joint level on an axial view of T2weighted MRI as previously described [27,28]. The cross-sectional area of LF was a more sensitive measurement than the thickness of LF [27]. 
To analyse the LF as comprehensively as possible, we therefore measured LF thickening as a cross-sectional area on MRI. The 18 control subjects were selected from the same age group; they had undergone thoracic spinal MRI for other conditions, such as back pain, and had no scoliotic curvature. In all control patients, the cross-sectional area of the LF was measured in the axial view between T7/8 and T9/10. Additionally, 22 scoliosis patients were evaluated by MRI before surgery for a Lenke type 1 or type 2 thoracic curve, as shown in Table 1. The measurement level was defined by the apex of the main thoracic curvature, and the cross-sectional area of the LF on the concave and convex sides was measured and analysed separately. The measurement was performed using commercial software (Synapse VINCENT, Fujifilm, Tokyo, Japan). Each value was measured three times per patient by one experienced spine surgeon, and the mean value was taken as the LF thickness (Figure 1).

Isolation of Human LF Cells from AIS Patients

LF tissues were obtained from scoliotic patients who had undergone surgery at Toyama University Hospital. This study was approved by the Ethics Review Committee of our institution (approval no. I2013004). Written informed consent was obtained from all patients before the collection of specimens. LF was cut into small fragments and subjected to sequential enzymatic digestion with collagenase. The cells were then filtered through a nylon mesh with a pore diameter of 70 µm (Corning Gilbert, Glendale, AZ, USA) and centrifuged at 1600 rpm, and the supernatant was discarded. The resulting pellet was washed twice with phosphate-buffered saline. Cells were cultured in 10 cm culture dishes in Dulbecco's Modified Eagle Medium under standard culture conditions (37 °C, 5% CO2). The medium was supplemented with 10% fetal bovine serum and 1% of an antibiotic mixture (100 U/mL penicillin, 100 µg/mL streptomycin) and was replaced every week. For subcultivation, the cells were detached with trypsin/ethylenediaminetetraacetic acid and expanded.

qPCR

Total RNA was prepared from LF cells or tissues using Isogen (Nippon Gene, Tokyo, Japan). Total RNA extraction and cDNA synthesis were carried out using the PureLink RNA Mini Kit (Invitrogen, Waltham, MA, USA) and the High-Capacity RNA-to-cDNA Kit (Invitrogen, Waltham, MA, USA) on a thermal cycler, respectively. Extracted RNA was quantified on a NanoDrop One spectrophotometer (Invitrogen, Waltham, MA, USA). Gene expression was analysed by SYBR Green-based qPCR (iTaq Universal SYBR Green Supermix, Bio-Rad, Hercules, CA, USA) on the CFX Connect system (Bio-Rad, Hercules, CA, USA). All procedures were performed according to each manufacturer's protocol. Data were normalized to GAPDH mRNA, and relative gene expression in LF tissues and cells was determined using the ΔΔCt method.
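The ΔΔCt normalisation described above can be written out explicitly. This is a generic sketch of the calculation (relative expression = 2^-ΔΔCt against GAPDH) with placeholder Ct values rather than measurements from the study.

```python
# Sketch of the delta-delta-Ct calculation used for relative expression (placeholder Ct values).
def relative_expression(ct_gene, ct_gapdh, ct_gene_ctrl, ct_gapdh_ctrl):
    d_ct_sample  = ct_gene - ct_gapdh            # normalise target to GAPDH in the sample
    d_ct_control = ct_gene_ctrl - ct_gapdh_ctrl  # same normalisation in the control (neutral LF)
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)                       # fold change relative to control

# Example: hypothetical COL1A2 Ct values for convex-side LF vs neutral-vertebra LF.
fold = relative_expression(ct_gene=22.1, ct_gapdh=17.8, ct_gene_ctrl=24.0, ct_gapdh_ctrl=17.9)
print(f"relative COL1A2 expression (convex vs control): {fold:.2f}-fold")
```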
For Azan staining, slides were incubated in prewarmed azocarmine G (40012, Muto Pure Chemicals, Tokyo, Japan) for 2 h, washed with distilled water, differentiated with aniline alcohol (40021, Muto Pure Chemicals Co., Ltd., Tokyo, Japan), briefly washed with acetic alcohol and distilled water, incubated in 5% phosphotungstic acid (40041, Muto Pure Chemicals Co., Ltd., Tokyo, Japan) for 2 h, briefly washed with distilled water, incubated in aniline blue/orange G solution (40052, Muto Pure Chemicals Co., Ltd., Tokyo, Japan) for 1 h, and then differentiated using 100% alcohol. For EVG staining, slides were stained with Weigert's resorcin Fuchsin solution (233-01655, Wako, Osaka, Japan) for 1 h, washed with 100% alcohol, the nuclei stained with Weigert's iron hematoxylin solution (298-21741, Wako, Osaka, Japan) for 10 min, washed with water, and then stained with Van Gieson solution F (221-01415, Wako, Osaka, Japan) for 3 min, and washed with 70% alcohol. After staining, slides were dehydrated, cleared, and mounted with a mounting agent for microscopy. We performed the method used to quantify the ratio of collagen fibers to elastic fibers as previously described [29]. To calculate the area of collagen and elastic fibers after EVG staining, we used the ImageJ software program (National Institutes of Health, Bethesda, MD, USA). As shown in Supplementary Figure S1, in the case of elastic fiber, after converting EVG staining to 8-bit grayscale, the area of the gray-stained part was calculated using ImageJ. In the case of collagen fiber, the area of the grayscale image was calculated by ImageJ after black-and-white inversion. A total of three EVG stains with a cross-section perpendicular to the elastic fiber were selected and measured three times to calculate the average value. Microarray Analysis Total RNA was extracted from the cells using a kit (PureLink RNA Mini Kit, Invitrogen). After RNA was qualified (Agilent 2100 Bioanalyzer), Cy3-labeled cRNA was synthesized from 50 ng of total RNA using a kit (Low Input Quick-Amp Labeling Kit, One-color, Agilent Technologies) and purified using another kit (RNeasy Mini Kit (Qiagen, Hilden, Germany). The concentration of amplified cRNA and dye incorporation was quantified using spectrophotometry (NanoDropOne spectrophotometer, Thermo Fischer) and hybridized using a kit (SurePrint G3 Human Gene Expression v3 8 × 60K Microarray Kit Design ID:072363, Agilent Technologies). After hybridization, arrays were washed consecutively (Gene Expression Wash Pack, Agilent Technologies). Fluorescence images of the hybridized arrays were scanned (SureScan Microarray Scanner, Agilent Technologies), and the scanned data were extracted with commercial software (Feature Extraction software ver. 12.1.1.1, Agilent Technologies). The raw microarray data are deposited in the National Center for Biotechnology Information Gene Expression Omnibus (GEO Series GSE85226). Gene expression analysis was performed (GeneSpring GX 14.9.1, Agilent Technologies). Each measurement was divided by the 75th percentile of all measurements in that sample at per chip normalization. The genes filtered by flags were detected in all samples and were subjected to further analyses. Statistical Analysis Data were compared by Mann-Whitney U-test, Pearson's correlation coefficient, and Student's t-tests were used for the statistical analysis using commercial software (JMP ® version 9, SAS Institute Inc., Cary, NC, USA). p-values < 0.05 were considered statistically significant. 
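The statistical comparisons named above (Mann-Whitney U-test, Student's t-test) can be scripted directly. The snippet below is a generic sketch using hypothetical per-patient LF cross-sectional areas for the three groups compared in the Results; the arrays are placeholders, not the study data.

```python
# Sketch with placeholder per-patient LF cross-sectional areas (mm^2), not study data.
import numpy as np
from scipy import stats

control = np.array([9.8, 10.2, 11.0, 10.7, 10.1])    # non-scoliotic LF
concave = np.array([12.9, 13.8, 13.2, 14.1, 13.4])   # apical concave side
convex  = np.array([19.5, 21.2, 20.8, 19.9, 20.4])   # apical convex side

for label, (a, b) in {"concave vs control": (concave, control),
                      "convex vs control":  (convex, control),
                      "convex vs concave":  (convex, concave)}.items():
    u, p_mw = stats.mannwhitneyu(a, b, alternative="two-sided")
    t, p_t  = stats.ttest_ind(a, b)
    print(f"{label}: Mann-Whitney p = {p_mw:.3f}, t-test p = {p_t:.4f}")
```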
Results We first measured the thicknesses of the LF in the apex of thoracic AIS curvatures in the convex and concave side with MRI in patients and compared them with LF in non-scoliotic patients. Next, transcriptional profiling, microarray, Western blotting, and histological analyses of LF were performed. Measurements of LF Hypertrophy in Patients with AIS We first measured the cross-sectional area of the LF at the apex of the main thoracic curvature on the convex and concave aspects in Lenke type 1 or 2 patients with AIS, as shown in Figure 1a,b [27,28]. Figure 1c shows the surgical specimens of convex and concave apical LFs from a Ponte osteotomy [30]. The LF of the convex side was more hypertrophic than the LF of the concave side. There was also a significant difference (p < 0.05) between the non-scoliotic group and scoliotic LF thicknesses, as shown in Figure 1d. Next, we investigated whether convex LF hypertrophy occurred in periapical areas. Figure 1e shows that the LF thickness of apex−1 (cranial to the apex), apex, and apex+1 (caudal to the apex) was significantly more hypertrophic than that of the concave side but was not as thick as the convex side at the apex of the curvature (p < 0.05). Histological Analysis of LF Hypertrophy with Comparison between the Concave and Convex Sides Next, we investigated the cellular properties of LF hypertrophy on the convex side occurring in periapical regions. Hematoxylin-eosin (HE), Azan, and Elastica van Gieson (EVG) staining were performed on LFs from both sides (Figure 2). LF of the convex side appeared to have a slightly denser fibrous structure than that of the concave side in HE staining (Figure 2a). Azan staining showed many areas that were strongly blue-stained on the convex side, suggesting an increase in collagen fibers (Figure 2b). EVG stain showed that the elastic fibers were sparse on the convex side because it was more disturbed on the convex than on the concave side ( Figure 2b). According to an analysis of EVG stain, the ratio of fibers (Collagen/Elastic) was significantly increased on the convex side compared to the concave side (p < 0.01), indicating that collagen fiber density was increased and elastic fibers were not changed much on the convex side. Quantitative PCR (qPCR) Analysis of LF Next, we performed gene expression analysis of LF hypertrophy on the convex and concave sides occurring in the periapical regions. Several types of collagen (type I, II, III, and X), cytokines such as transforming growth factor (TGF)-beta1, Interleukin (IL)-1, IL-6, tissue necrosis factor (TNF)-alfa, matrix metalloproteases (MMPs) such as MMP2, and growth factors such as connective tissue growth factor (CTGF) and platelet-derived growth factor (PDGF) have been reported as factors involved in LF hypertrophy [28,29]. LF tissues of neutral vertebrae (Control), periapical concave, and convex side were removed to release intervertebral space. The expression analysis of these tissues was performed with quantitative PCR (qPCR). Extracellular matrix (ECM) protein of the periapical concave and convex sides was significantly increased compared to that of control, as shown in Figure 3 (p < 0.05). Furthermore, the convex side of ECM protein was significantly increased compared to the concave side (p < 0.05). IL-6, TGF-beta1 and CCN1 showed same expression pattern (p < 0.05). 
Gene Expression Profiling of LF between the Concave and Convex Sides

Next, we compared gene expression profiles between the concave and convex sides of the apical region using microarray analysis of these tissues. Heatmap analysis (Figure 4a) showed relatively similar gene profiles on the concave and convex aspects. Scatterplot analysis (Figure 4b) identified 11 genes with different expression levels. We then used Western blotting to confirm whether these differences were also present at the protein level. This revealed that the protein expression level of ERC2 (ELKS/RAB6-Interacting/CAST Family Member 2) was clearly different between the concave and convex sides (Figure 5a). Immunohistochemical analysis showed that ERC2 protein was abundant on the convex side and localized in the cytoplasm (Figure 5b). We further investigated why increased expression of ERC2 was associated with LF hypertrophy on the convex side. LF cells from controls were isolated and used for in vitro experiments. We generated ERC2 overexpression vectors and transfected them into the isolated LF cells. COL1A2, COL2A1, COL3A1, IL-6, and TGF-beta1 were significantly increased in the presence of elevated ERC2 in LF cells (Figure 5c), suggesting that overexpression of ERC2 may lead to LF hypertrophy on the convex side of the curvature.

Gene Expression Profiling of LF between Controls and the Convex Side of the Scoliotic Curvature

Next, we compared gene expression profiles between controls and the convex side. Microarray analysis with a heatmap (Figure 6a) showed relatively similar gene profiles. Scatterplot analysis (Figure 6b) revealed 47 genes with significantly different expression levels. Western blots revealed that the expression level of the gene product of V-maf musculoaponeurotic fibrosarcoma oncogene homolog B (MAFB) clearly differed among control, concave, and convex side LF tissues (Figure 7a), with the highest expression in the tissues on the convex side. Immunohistochemical analysis showed that MAFB appeared to be expressed in a cell population clustered around the fissures of the LF (Figure 7b). MAFB protein was abundant in the cytoplasm, especially in larger cells (Figure 7b). To investigate why increased expression of MAFB causes hypertrophy, we generated MAFB overexpression vectors and transfected them into isolated LF cells. This resulted in increased expression of IL-6 (Figure 7c), while TGF-beta1 expression was unaffected. To further investigate the mechanism of hypertrophy on the convex side of the curvature, we profiled collagen expression after the simultaneous addition of IL-6 and TGF-beta1. Adding these cytokines to control LF cells significantly increased COL1A2 and COL3A1 (Figure 7d). These findings suggest that TGF-beta1 and IL-6 may also contribute to the hypertrophy of the LF on the convex side of the curvature in patients with AIS.
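The candidate gene lists above came from scatterplot (fold-change) comparisons of per-chip-normalised microarray intensities, with each sample scaled to its 75th percentile as described in the Methods. The sketch below schematically reproduces that normalisation and a simple fold-change filter on a random stand-in matrix; it is an illustration of the selection step, not the GeneSpring analysis itself.

```python
# Schematic sketch of 75th-percentile per-chip normalisation and fold-change filtering.
import numpy as np

rng = np.random.default_rng(0)
intensities = rng.lognormal(mean=5.0, sigma=1.0, size=(60000, 6))  # stand-in for 8x60K probe data
groups = np.array([0, 0, 0, 1, 1, 1])                              # e.g., concave vs convex samples

p75 = np.percentile(intensities, 75, axis=0)     # one scaling factor per chip
norm = intensities / p75                         # per-chip normalisation

log2_ratio = np.log2(norm[:, groups == 1].mean(axis=1) /
                     norm[:, groups == 0].mean(axis=1))
candidates = np.where(np.abs(log2_ratio) > 1)[0]  # simple 2-fold cut-off for illustration
print(f"{candidates.size} probes pass the illustrative 2-fold filter")
```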
It has been reported that mechanical stress is a cause of LF hypertrophy, based on computed tomographic, histological, and collagen content analyses of 161 lumbar LFs [22]. A finite element analysis of an animal scoliotic model also reported a difference in mechanical stress between the convex and concave sides (6593.02 vs. 5022.48), with the mechanical stress on the convex side increased (by 31.27%) in the vertical direction of the vertebral body [31]. Another study found a difference in mean compressive stress between the concave and convex sides of scoliotic curves ranging between 0.1 and 0.2 MPa and showed that tension force was strongest in the posterior aspect of the convex side [32]. The tension force on the posterior convex side may be related to the hypertrophic changes in the LF there. According to pathological analyses of LF hypertrophy in elderly patients, there is a loss of elastic fibers and an increase in collagenous fibers [33]. Similarly, in an animal model of LF hypertrophy, a decrease in elastic fibers and an increase in cartilaginous tissue and collagen fibers were observed [29,34]. It has been reported that hypertrophic change of the LF is associated with increased expression of TGF-beta1 [20,29]. In addition, IL-6 mRNA and protein expression is increased in the hypertrophied LF of patients with lumbar spinal stenosis [35,36]. Other factors possibly involved in hypertrophic change of LF tissue include CTGF [29], PDGF [29], MMP2 [36], CCN5 [37], VEGF [38], and TIMP2 [39]. These cytokines are thought to be related to the TGF-beta pathway, LF tissue degradation, or its inhibition [40]. Our data showed a significant decrease in elastic fibers and an increase in collagenous tissue with EVG and Azan staining. The qPCR analysis of mRNA expression in LF tissue showed significantly increased TGF-beta1, IL-6, and CCN1 in the LF of the convex side compared with the control and concave sides. It has been reported that CCN1 is induced by TGF-beta1 and enhances TGF-beta1/SMAD3-dependent profibrotic signaling in fibroblasts [41]. These findings suggest that these cytokines may be associated with the induction of hypertrophic change in the LF. Next, we examined gene expression profiles between the concave and convex sides and between the control and convex side to identify additional factors associated with the hypertrophic changes of the LF. We identified two genes associated with LF hypertrophy, ERC2 and MAFB, by microarray analysis and Western blotting. The ERC2 gene is highly expressed in the nervous system and is also widely expressed in skeletal muscle and ovaries [42,43]. Members of this protein family form part of the cytomatrix at the presynaptic active zone and function as regulators of neurotransmitter release [42,43]. However, to the best of our knowledge, there have been no previous reports associating ERC2 with the LF. Likewise, there have been no reports of ERC2 overexpression inducing the expression of TGF-beta1 or these collagens, so this may represent a previously unreported function of ERC2. MAFB, which is widely expressed in pancreatic α cells, renal podocytes, epidermal keratinocytes, hair follicles, and hematopoietic stem cells, also functions in embryonic urethral formation [44]. The protein encoded by this gene is a basic leucine zipper transcription factor that plays an important role in the regulation of lineage-specific hematopoiesis [44].
Additionally, MAFB protein is expressed in the tissue of Dupuytren's cords [45], and MAFB and Sox9 form a positive feedback loop that maintains cell stemness and tumor growth in vitro and in vivo [46]. MAFB expression is induced by IL-10, IL-4, and IL-13 [44,47]. Our data showed that overexpression of MAFB in LF cells strongly induced IL-6. To the best of our knowledge, there have been no previous reports of MAFB directly inducing IL-6. In our experiments, overexpression of MAFB did not directly increase the expression of collagens in LF cells (data not shown). However, increased expression of collagens was observed with the simultaneous addition of TGF-beta1 and IL-6. These findings suggest that MAFB may be involved in ligamentum flavum hypertrophy through elevated IL-6 expression. Considering the progression of scoliosis (Figure 8), LF hypertrophy may first progress through the expression of cytokines such as IL-6 and TGF-beta1, driven by the increased expression of ERC2 and MAFB in response to mechanical stress. Scoliosis may then progress because hypertrophy of the LF, a supporting tissue of the posterior spine, induces an imbalance of growth between the vertebral body and the posterior column, including the pedicles, facets, and spinous processes. AIS would thus progress as LF hypertrophy tethers the posterior spinal elements while the anterior vertebral body continues to overgrow.

Conclusions

The ERC2 and MAFB genes were associated with LF hypertrophy through increased TGF-beta1 and IL-6 in patients with AIS.

Funding: This research was supported by a Japanese Grant-in-Aid for Scientific Research (B), 20H03801.

Institutional Review Board Statement: The study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Ethics Committee of the University of Toyama (consent no. I2013004).

Informed Consent Statement: Informed consent was obtained from all subjects involved in the study. This study was approved by the Ethical Review Board of the University of Toyama (consent no. I2013004).

Data Availability Statement: The data presented in this study are available on request from the corresponding author.

Acknowledgments: The authors thank the technical assistants of the University of Toyama for their help with the genetic and cellular analyses.

Conflicts of Interest: The authors declare no conflict of interest.
Mother and Child T Cell Receptor Repertoires: Deep Profiling Study

The relationship between maternal and child immunity has been actively studied in the context of complications during pregnancy, autoimmune diseases, and haploidentical transplantation of hematopoietic stem cells and solid organs. Here, we have for the first time used high-throughput Illumina HiSeq sequencing to perform deep quantitative profiling of T cell receptor (TCR) repertoires for peripheral blood samples of three mothers and their six children. Advanced technology allowed accurate identification of 5 × 10^5 to 2 × 10^6 TCR beta clonotypes per individual. We performed comparative analysis of these TCR repertoires with the aim of revealing characteristic features that distinguish related mother-child pairs, such as relative TCR beta variable segment usage frequency and relative overlap of TCR beta complementarity-determining region 3 (CDR3) repertoires. We show that thymic selection essentially and similarly shapes the initial output of the TCR recombination machinery in both related and unrelated pairs, with minor effect from inherited differences. The achieved depth of TCR profiling also allowed us to test the hypothesis that mature T cells transferred across the placenta during pregnancy can expand and persist as functional microchimeric clones in their new host, using characteristic TCR beta CDR3 variants as clonal identifiers.

In recent years, the potential of next-generation sequencing (NGS) to reveal the full complexity of human and mouse immune receptor repertoires has inspired numerous efforts to develop optimal techniques for achieving large-scale T cell receptor (TCR) and antibody profiling (8-12) and to decipher various aspects of adaptive immunity (8, 9, 11, 13-17). With appropriate library preparation methods (18), NGS techniques now make it possible to perform quantitative analysis of hundreds of thousands or millions of distinct TCR beta complementarity-determining region 3 (CDR3) variants. This individual diversity of TCR beta CDR3 variants, which is generated in the course of V-D-J recombination and the random addition and deletion of nucleotides in the thymus, largely determines the whole diversity of naïve T cells and specificity of T cell immune responses (19, 20). In the present study, we have used deep NGS profiling to compare TCR beta repertoires of mothers and their children. We achieved a profiling depth of 500,000-2,000,000 unique TCR beta CDR3 clonotypes per donor, and performed comparative analysis with the aim of revealing specific features of TCR repertoires that distinguish related mother-child pairs from unrelated individuals, and how these familial repertoires manifest the influence of inherited factors, such as the elements of TCR recombination machinery and human leukocyte antigens (HLA). By comparing out-of-frame (i.e., non-functional and thus not subjected to selection) and in-frame TCR beta repertoires, we also show the extent of the impact of thymic selection and the common trends in how this process shapes individual repertoires. Additionally, the profiling depth that we achieved allowed us to look for the potential presence of maternal or fetal microchimeric T cell clones that may have transmigrated through the placenta as mature α/β T cells and which subsequently persist in both related donors, by using characteristic TCR beta CDR3 variants as clonal identifiers.
SAMPLE COLLECTION

This study was approved by the ethical committee of the Federal Scientific Clinical Center of Pediatric Hematology, Oncology, and Immunology. Blood donors provided informed consent prior to participating in the study. Ten milliliters of peripheral blood was obtained from nine systemically healthy Caucasian donors: three mothers (average age 40 ± 4 years) and their six children (average age 11 ± 4 years). Peripheral blood mononuclear cells (PBMCs) were isolated by Ficoll-Paque (Paneco, Russia) density gradient centrifugation. Total RNA was isolated with Trizol (Invitrogen, USA) in accordance with the manufacturer's protocol.

CONTAMINATION PRECAUTIONS

T cell receptor beta libraries were generated in clean PCR hoods with laminar flow, using reagents of high purity and pipette tips with hydrophobic filters. As an additional precaution, we generated the TCR beta libraries for the two groups being compared (mothers and their children) a month apart, and sequenced the two libraries in two separate Illumina runs to guarantee the absence of inter-library contamination during amplification or on the solid phase of the sequencer.

PREPARING cDNA LIBRARIES FOR QUANTITATIVE TCR BETA PROFILING

cDNA-based library preparation was performed essentially as described previously (9, 12, 16, 18, 21, 22). Briefly, we used the Mint kit (Evrogen, Russia) for first-strand cDNA synthesis. For each donor sample, the whole amount of extracted RNA was used for cDNA synthesis, with 1.5 µg of RNA per 15 µl reaction volume. We incubated the mixture of RNA and priming oligonucleotide BC_R4_short (GTATCTGGAGTCATTGA), which is specific to both variants of the human TCR beta constant (TRBC) segment, at 70°C for 2 min and 42°C for 2 min for annealing. We then added the 5′-adapter for the template switch. The reaction was carried out at 42°C for 2 h, with 5 µl of IP solution added after the first 40 min. Further cDNA library amplification was performed in two sequential PCRs using Encyclo PCR mix (Evrogen). To capture the maximum number of input cDNA molecules, we used the whole amount of synthesized cDNA for the first PCR amplification. The first PCR totaled 18 cycles with universal primers M1SS (AAGCAGTGGTATCAACGCA) and BC2R (TGCTTCTGATGGCTCAAACAC), which are specific to the 5′-adapter and a nested region of the TRBC segments, respectively. The primer annealing temperature was set at 62°C. The products of the first PCR were combined, and a 100-µl aliquot was purified by the QIAquick PCR Purification Kit (Qiagen) and eluted in 20 µl of EB buffer. The second PCR amplification was performed for 8-10 cycles with a mix of TCR beta joining (TRBJ)-specific primers and the universal primer M1S ((N)2-4(XXXXX)CAGTGGTATCAACGCAGAG), which is specific to the 5′-adapter and is nested relative to the M1SS primer used in the first PCR amplification. XXXXX represents a sample barcode introduced in the second PCR, and (N)2-4 are random nucleotides that were added in order to generate diversity for better cluster identification during Illumina sequencing. The primer annealing temperature was set at 62°C.

ILLUMINA HiSeq SEQUENCING

PCR products carrying pre-introduced sample barcodes were mixed together in equal ratio for each of the two groups (mothers and children). Illumina adapters were ligated according to the manufacturer's protocol using the NEBNext DNA Library Prep Master Mix Set for Illumina (New England Biolabs, USA).
Generated libraries were analyzed using two separate Illumina HiSeq 2000 lanes in separate runs with 100 + 100 nt paired-end sequencing using Illumina sequencing primers. Raw sequences were deposited in the NCBI SRA database (PRJNA229070).

NGS DATA ANALYSIS

TCR beta variable (TRBV) segment identification [using IMGT nomenclature (23)], CDR3 identification (based on the sequence between conserved Cys-104 and Phe-118, inclusive), clonotype clusterization, and correction of reverse transcription, PCR, and sequencing errors were performed using our MiTCR software (24). The sequencing quality threshold of each nucleotide within the CDR3 region was set as Phred >25, with low-quality sequence rescue by mapping to high-quality clonotypes. The strictest "eliminate these errors" correction algorithm was employed to eliminate the maximal number of accumulated PCR and sequencing errors.

STATISTICAL ANALYSIS

We used the Jensen-Shannon divergence (JS), which is a symmetrized version of the Kullback-Leibler divergence (KL), to quantify the similarity between the clonotype TRBV gene usage distributions in related and unrelated mother-child pairs. JS and KL are defined as follows (25):

KL(P||Q) = Σ_i p_i log(p_i / q_i)

JS(P, Q) = ½ KL(P||M) + ½ KL(Q||M), where M = (P + Q)/2,

where P and Q correspond to the TRBV gene segment frequency distributions of the two individuals being analyzed, and p_i and q_i stand for the frequency of a particular TRBV gene segment in the first and second individual, correspondingly. For statistical comparison of the JS among related and unrelated mother-child pairs, we used a two-tailed, unpaired Student's t-test, with P-values <0.05 considered significant. To account for multiple testing, Bonferroni-corrected P-values were used. We used linear regression to analyze the dependency between the TRBV-CDR3/CDR3 overlap ratio and the number of shared major histocompatibility complex I (MHC-I) alleles, and calculated the Pearson correlation coefficient. The linear model y = β0 + β1·x, where y is the overlap ratio and x is the number of shared MHC-I alleles, was fit using the least-squares method. Linear regression and correlation analysis were performed using the R programming language.

HLA TYPING

The samples were HLA-typed using the SSP AllSet Gold HLA-ABC Low Res Kit and SSP AllSet Gold HLA-DRDQ Low Res Kit (Invitrogen), and results were processed using UniMatch software.

RESULTS

We obtained at least 1 × 10^7 TCR beta CDR3-containing sequencing reads for each mother and about 3 × 10^6 reads for each child. MiTCR software analysis yielded 500,000-2,000,000 distinct TCR beta CDR3 clonotypes per donor (Table 1), representing a significant portion of the total TCR beta diversity for an individual, the lower bound of which has been estimated at ~4 million (8). We then subjected these individual TCR beta datasets to comparative analysis in an effort to identify features that distinguish TCR beta repertoires of related mother-child pairs.

TRBV GENE USAGE

We analyzed the relative usage of TRBV gene segments in mother-child pairs at three levels (see Figure 1):

Out-of-frame TCR beta variants

The influence of genetic effects on the recombination machinery, which determines the relative frequencies of TRBV gene segment usage in TCRs generated before selection in the thymus, should be reflected by out-of-frame TCR variants that are not subjected to the pressure of further selective processes. Due to nonsense-mediated decay mechanisms, RNA-based libraries generally contain a low percentage of out-of-frame TCR beta variants (9, 12, 26, 27).
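As a concrete illustration of the quantities defined under Statistical Analysis above, the following is a minimal sketch; it is not the authors' code, and the function names, toy frequency vectors, and example allele counts are all assumptions made for this example.

```python
import numpy as np

def kl_divergence(p, q):
    """KL(P||Q) = sum_i p_i * log(p_i / q_i); bins where p_i = 0 contribute zero."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def js_divergence(p, q):
    """JS(P, Q) = 0.5 * KL(P||M) + 0.5 * KL(Q||M), with M = (P + Q) / 2."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    m = 0.5 * (p + q)
    return 0.5 * kl_divergence(p, m) + 0.5 * kl_divergence(q, m)

# Hypothetical TRBV usage frequencies for two donors (each vector sums to 1).
mother = np.array([0.30, 0.25, 0.20, 0.15, 0.10])
child = np.array([0.28, 0.27, 0.18, 0.17, 0.10])
print("JS divergence:", js_divergence(mother, child))

# Least-squares fit of the TRBV-CDR3/CDR3 overlap ratio (y) on the number of
# shared MHC-I alleles (x), plus the Pearson correlation coefficient.
x = np.array([0, 1, 2, 3, 4], dtype=float)      # hypothetical shared-allele counts
y = np.array([1.00, 1.05, 1.15, 1.22, 1.30])    # hypothetical overlap ratios
beta1, beta0 = np.polyfit(x, y, 1)              # slope and intercept of y = b0 + b1*x
pearson_r = np.corrcoef(x, y)[0, 1]
```

In the study itself, the divergences are computed over the full set of TRBV segments rather than this five-element toy vector, and the regression is fit in R; this sketch only mirrors the stated definitions.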
Nevertheless, out-of-frame CDR3 sequences constituted 2.5% of all clonotypes (Table 1; 16,048-45,300 clonotypes per donor), which is sufficiently abundant to perform statistical analysis. These subsets were used to compare TRBV gene segment usage in related and unrelated mother-child pairs before thymic selection. At this level of out-of-frame, non-functional TCR beta variants, the Jensen-Shannon divergence in TRBV gene usage was comparable for related and unrelated mother-child pairs, albeit with a non-significant increase in divergence for the latter (Figure 2A; Figures 3A,B, first 2 bars).

Low-frequency in-frame clonotypes

The pressure of thymic selection can be tracked by comparing TRBV gene segment usage in out-of-frame TCR beta variants relative to those variants represented in naïve T cells. In this work, we did not perform separate TCR profiling of FACS-sorted naïve T cells. We aimed to achieve maximal depth of analysis, and sought to avoid the loss of cells and RNA and the general quantitative biases that inevitably arise from the cell sorting process. We estimated the pool of TCR beta clonotypes that predominantly belong to the naïve subset as follows. We used FACS analysis to identify the percentage of naïve CD27^high CD45RA^high CD3+ T cells for each donor (28). This analysis demonstrated that naïve T cells constitute 40-73% of the T cell population in children and 27-55% of the T cell population in mothers (Table 1; Figure 1). Since each naïve T cell clone is usually represented by small numbers of TCR-identical cells in an individual (29), for the purposes of bulk analysis, we hypothesized that the subset of low-frequency clonotypes that occupies the same share of homeostatic space as the FACS-determined share of naïve T cells for that particular donor (433,293-1,797,650 clonotypes per donor) predominantly includes naïve T cells. At this level of low-frequency, in-frame TCR beta clonotypes, TRBV gene segment usage was significantly less divergent compared to out-of-frame TCR beta variants, both in related and unrelated pairs (Figures 2B and 3). Additionally, TRBV gene segment usage was significantly more similar for related versus unrelated pairs (Figures 3A,B, bars 3, 4). In accordance with the JS analysis, comparison within related triplets revealed equalization of the usage of particular TRBV gene segments in low-frequency, in-frame TCR clonotypes compared to out-of-frame TCR variants (Figure 4). For example, in each triplet, we saw the usage of TRBV gene segments 12-3, 12-4, 20-1, 21-1, and 23-1 equalize in the low-frequency TCR beta clonotype pool. We also observed an equalizing decrease in TRBV 7-3 usage in triplets A and C, and an equalizing increase in TRBV 28 usage in triplet B. Notably, the observed changes in TRBV gene segment usage were generally similar in different unrelated donors (compare Figures 4A-C), and the convergence of TRBV usage after thymic selection (difference of out-of-frame versus in-frame TRBV usage divergence) was not significantly dependent on the number of shared HLA alleles (R = 0.12, P = 0.63).

High-frequency in-frame clonotypes

The influence of antigen-specific reactions on selection of TRBV gene segments could be tracked by comparing TRBV gene usage in naïve and antigen-experienced T cells. Following the same logic that we used above for the approximate identification of the subset of naïve TCR beta clonotypes, we hypothesized that the most abundant clonotypes predominantly represent antigen-experienced T cell clones.
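The delineation of the low-frequency, naive-like pool described above can be sketched as follows. This is only an illustration of the stated logic, not the authors' implementation; the function name, data layout, and toy values are assumptions.

```python
def low_frequency_subset(clonotype_freqs, naive_fraction):
    """Accumulate the rarest clonotypes until their combined share of the
    repertoire reaches the FACS-determined fraction of naive T cells.

    clonotype_freqs: dict mapping clonotype id -> frequency (frequencies sum to 1)
    naive_fraction:  fraction of CD3+ cells that are naive for this donor
    """
    ordered = sorted(clonotype_freqs.items(), key=lambda kv: kv[1])  # rarest first
    selected, cumulative = set(), 0.0
    for clone_id, freq in ordered:
        if cumulative + freq > naive_fraction:
            break
        selected.add(clone_id)
        cumulative += freq
    return selected

# Toy example: a donor whose naive T cells make up 55% of the CD3+ population.
toy_repertoire = {"clone_A": 0.40, "clone_B": 0.25, "clone_C": 0.15,
                  "clone_D": 0.10, "clone_E": 0.06, "clone_F": 0.04}
naive_like_pool = low_frequency_subset(toy_repertoire, naive_fraction=0.55)
```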
We defined this antigen-experienced population as clones representing >0.001% of all CDR3 sequences. Thus, the lower bound for this group was approximately an order of magnitude greater than the upper border set for the low-frequency clones in a given donor's T cell pool (Figure 1). Such delineation, with a gap between the two subsets, minimized "contamination" by naïve TCR beta clonotypes. Still, the pool of high-frequency in-frame clonotypes could contain a portion of naïve clonotypes with TCR beta CDR3 sequence variants of low complexity, which are repetitively produced in the thymus due to convergent recombination events and thus may be highly represented (15). This set of the 2,803-8,285 most abundant clonotypes per individual cumulatively occupied 13.9-46.2% of the homeostatic T cell space in each donor. These high-frequency TCR beta clonotypes were generally characterized by increased variability in TRBV gene segment usage, and related and unrelated mother-child pairs were nearly indistinguishable (Figure 2C; Figures 3A,B, bars 5, 6).

OVERLAP OF TCR BETA REPERTOIRES FOR RELATED AND UNRELATED MOTHER-CHILD PAIRS

Several studies in recent years have revealed that unrelated individuals widely share TCR beta repertoires (13-15, 30-32). However, it is presently unclear whether the repertoires of haploidentical individuals are characterized by a higher level of overlap compared to unrelated donors. Additionally, for related mother-child pairs, shared TCR beta variants could conceal microchimeric T cell clones that have been physically shared across the placenta (see below). To address these questions, we performed comparative analysis of TCR beta repertoire overlap for related and unrelated mother-child pairs by quantifying CDR3 variant identity at the amino acid level, at the nucleotide level, and at the nucleotide level in conjunction with identical TRBV and TRBJ gene segment usage (i.e., fully identical TCR beta chains). We measured overlaps separately for low-frequency and high-frequency in-frame clonotypes (as delineated in Figure 1), and for all in-frame clonotypes. Table 2 shows raw, non-normalized numbers of CDR3 variants shared on average by related and unrelated mother-child pairs. Normalized results are plotted in Figure 5. (See Figure 1 for the delineation of low- and high-frequency clonotypes; note that "all in-frame clonotypes" include not only low-frequency and high-frequency clonotypes but also medium-frequency ones.) For all CDR3 categories, the degree of overlap was always slightly higher for related pairs, but this difference never approached a significant level compared to unrelated pairs. The highest level of overlap was observed for high-frequency clonotypes, in agreement with previous work (15).

WITHIN AMINO ACID CDR3 OVERLAPS OF EXPANDED CLONOTYPES, PERCENTAGE OF CLONOTYPES WITH IDENTICAL TRBV GENES IS INCREASED FOR RELATED MOTHER-CHILD PAIRS

The CDR3 region is considered to form interactions mainly with the antigenic peptide, while CDR1 and CDR2, encoded in the TRBV segment, are mostly responsible for MHC recognition (33-35). Some TRBV segments have nearly identical sequences taking part in CDR3 formation, so two different TRBV segments can often give rise to the same CDR3 amino acid sequence. However, in two individuals with similar or identical HLA alleles, proliferating antigen-specific clones with the same TRBV segment and CDR3 amino acid sequence that recognize the same peptide-MHC complex can be preferentially activated (36).
Therefore, since related mother and child pairs share at least 50% of their HLA alleles, we could expect that antigen-experienced clones with identical amino acid CDR3 variants that recognize the same antigenic peptide should more often carry the same TRBV segment encoding the CDR1 and CDR2 responsible for MHC recognition. To verify this hypothesis, we analyzed various repertoire pairs comprising the 10,000 most abundant amino acid CDR3 clonotypes from each individual and computed overlap in terms of shared amino acid CDR3 sequences and shared amino acid CDR3 sequences carrying the same TRBV segment (i.e., identical CDR1, 2, and 3). We then determined the ratio of TRBV-CDR3 overlap to CDR3 overlap for each mother-child pair. In all cases, the ratio was greater for related mother-child pairs (1.3-fold, ±0.16, Figure 6A). Moreover, we observed a significant positive correlation of this ratio with the number of shared MHC-I alleles between individuals (R = 0.62, P < 0.006, Figure 6B; Table 3).

SELECTION IN THE THYMUS DECREASES AVERAGE CDR3 LENGTH COMPARED TO THE INITIALLY GENERATED REPERTOIRE

Comparison of the out-of-frame and in-frame CDR3 repertoires revealed that the former are characterized by a higher average length (45.6 ± 0.4 versus 43.3 ± 0.2) and an increased number of added nucleotides (8.6 ± 0.2 versus 7.4 ± 0.1, see Figure 7A), in both mothers and children. This finding indicates that, upon recombination, the initially generated TCR beta CDR3 repertoire (the parameters of which are preserved in the non-functional out-of-frame repertoire) is characterized by a higher average length, while further selection in the thymus essentially shapes the repertoire toward lower CDR3 length and fewer added nucleotides.

SEARCHING FOR MICROCHIMERIC CLONES TRANSFERRED ACROSS THE PLACENTA AS MATURE T CELLS

It is well established that mother and child exchange cells across the placenta during pregnancy (37-42), and that the progeny of these migrating cells persist in the new host for decades after gestation (43-45). Most authors agree that lymphoid progenitor cells commonly cross the placenta to populate the new host (45-48). Some observations also indicate that mature T cells can transmigrate through the placenta (see Discussion). However, it remains to be determined whether the transferred mature T cells (hereinafter referred to as mature-microchimeric T cells) can further persist and serve as functional T cell clones in their new host. We hypothesized that the present deep sequence analysis of such a substantial portion of the maternal and fetal TCR repertoire (including the absolute majority of proliferated antigen-experienced T cell clones) could reveal the presence of transferred and multiplied functional T cell populations, albeit without the immediate ability to distinguish the direction of transfer (i.e., maternal versus fetal microchimerism). Indeed, microchimeric T cell clones that were initially transferred across the placenta as mature T cells (mature-microchimeric T cell clones) within a given mother-child pair should be characterized by the same TCR beta CDR3 nucleotide sequence and the same TRBV and TRBJ gene segments, which therefore could serve as a clone-specific identifier. However, ~40% of the CDR3 nucleotide variants shared between any two individuals were characterized by the same TRBV and TRBJ gene segments, in similar numbers for both related and unrelated mother-child pairs.
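For reference, the three levels of repertoire overlap used throughout (amino acid CDR3, nucleotide CDR3, and nucleotide CDR3 with identical TRBV/TRBJ) and the TRBV-CDR3/CDR3 ratio can be computed along the following lines. This is a hedged sketch only; the record fields and function names are assumptions, not the authors' data model.

```python
def overlap_counts(rep1, rep2):
    """Count clonotype variants shared by two repertoires at three levels of
    stringency. Each repertoire is a list of dicts with (assumed) keys
    'cdr3_aa', 'cdr3_nt', 'v', and 'j'."""
    def sets(rep):
        return (
            {c["cdr3_aa"] for c in rep},
            {c["cdr3_nt"] for c in rep},
            {(c["v"], c["cdr3_nt"], c["j"]) for c in rep},
        )
    aa1, nt1, vnj1 = sets(rep1)
    aa2, nt2, vnj2 = sets(rep2)
    return {
        "amino_acid_CDR3": len(aa1 & aa2),
        "nucleotide_CDR3": len(nt1 & nt2),
        "TRBV_CDR3_TRBJ": len(vnj1 & vnj2),
    }

def trbv_cdr3_ratio(rep1, rep2):
    """Share of overlapping amino acid CDR3 variants that also use the same
    TRBV segment (the ratio that correlates with shared MHC-I alleles above)."""
    aa = {c["cdr3_aa"] for c in rep1} & {c["cdr3_aa"] for c in rep2}
    aa_v = ({(c["v"], c["cdr3_aa"]) for c in rep1}
            & {(c["v"], c["cdr3_aa"]) for c in rep2})
    return len(aa_v) / len(aa) if aa else float("nan")
```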
In absolute terms, this sharing corresponds to 1,766-5,410 shared clonotype variants across different donor pairs (Table 2; Figure 5). This widespread sharing of identical TCR beta nucleotide variants makes the TRBV-CDR3-TRBJ identifier insufficient to distinguish clones that were physically transferred across the placenta as mature T cells with recombined TCRs from public TCRs resulting from independent convergent recombination events (15, 32). Thus, if mature-microchimeric T cell clones are present, they are concealed amongst the overwhelming majority of natural public TCRs, and additional characteristics are needed to delineate them. It has been reported that public TCR beta clonotypes are generally characterized by a low number of added nucleotides in CDR3 (i.e., low complexity) (14, 15, 32). We therefore used the number of added nucleotides as an additional selective characteristic that essentially determines the probability of convergent recombination events leading to CDR3 variants that are identical at the nucleotide level (32, 49). Comparison of this characteristic for all TCR beta CDR3 nucleotide variants and for those TRBV-CDR3-TRBJ nucleotide variants that were shared between unrelated mother-child pairs revealed that the latter were characterized by much lower numbers of added nucleotides (Figure 7A). The transfer of mature T cells across the placenta should not be dependent on CDR3 length or the number of added nucleotides. In humans, it has been demonstrated that there is no significant difference between adult blood and cord blood samples in the mean number of added nucleotides (50). Therefore, this characteristic should be essentially identical for both feto-maternal and materno-fetal mature-microchimeric T cell clones and for the general TCR beta repertoire. If the TCR beta repertoires of related mother-child pairs carry mature-microchimeric T cell clones of interest, we would expect to observe shaping of the added nucleotide curve proportional to the contribution of such clones to the repertoire overlap (Figure 7B). The sensitivity of this method to the percentage of mature-microchimeric T cell clones in the shared TCR beta population is therefore limited by the natural dispersion of the added nucleotide curves for unrelated pairs. For example, if mature-microchimeric T cell clones contribute ~0.3% of the TRBV-CDR3-TRBJ overlap for a mother-child pair (i.e., ~10 out of 3,000 overlapping clonotypes, out of the ~1 × 10^6 total clonotypes sequenced from each donor), the shape of the added nucleotide curve would be indistinguishable from that of an unrelated donor pair, and therefore below the sensitivity threshold of this method. In contrast, the presence of 100 mature-microchimeric T cell clones out of 3,000 clonotypes (i.e., 3.3% of shared variants) per pair of related donors could be clearly distinguished (Figure 7B), and this can therefore be considered the approximate sensitivity limit of the method. We subsequently determined that the presence of mature-microchimeric T cell clones is undetectable in all cases, based on the added nucleotide curves for overlapping TRBV-CDR3-TRBJ nucleotide sequences for our six related mother-child pairs (Figure 7C). Correspondingly, the average numbers of added nucleotides in the shared TRBV-CDR3-TRBJ nucleotide variants were indistinguishable for related versus unrelated mother-child pairs (data not shown).
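The mixing logic behind this sensitivity estimate can be sketched as follows. This is an assumed reading of the computation described above and in the Figure 7 legend, not the authors' code, and the toy distributions are invented purely for illustration.

```python
import numpy as np

def expected_shared_curve(individual_curve, unrelated_curve, f):
    """Expected added-nucleotide distribution of shared TRBV-CDR3-TRBJ variants
    when a fraction f of them are true mature-microchimeric clones (assumed to
    follow the whole-repertoire distribution) and the remaining 1 - f are
    convergent public clonotypes (following the unrelated-pair distribution)."""
    individual_curve = np.asarray(individual_curve, float)
    unrelated_curve = np.asarray(unrelated_curve, float)
    return f * individual_curve + (1.0 - f) * unrelated_curve

# Toy normalized histograms over 0..15 added nucleotides (invented shapes).
bins = np.arange(16)
individual = np.exp(-0.5 * ((bins - 7.4) / 3.0) ** 2)
individual /= individual.sum()
unrelated = np.exp(-0.5 * ((bins - 2.5) / 1.5) ** 2)
unrelated /= unrelated.sum()

for f in (1.0, 0.33, 0.033, 0.003):
    curve = expected_shared_curve(individual, unrelated, f)
    # A detectable microchimeric contribution shifts mass toward higher
    # added-nucleotide counts relative to the unrelated-pair baseline.
    print(f, float(np.sum(bins * curve)))  # mean added nucleotides of the model curve
```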
The above-described comparison of added nucleotide curves was performed at the level of distinct TCR beta clonotypes, not sequencing reads, so that the influence of each T cell clone's relative representation within the repertoire was excluded. We obtained similar, albeit noisier, results when performing the same analysis at the level of sequencing reads (i.e., taking into account relative clonal size). As such, we have not identified any meaningful difference between the subsets of shared TRBV-CDR3-TRBJ nucleotide variants for related versus unrelated mother-child pairs that would allow us to establish detection of a subpopulation of mature-microchimeric T cells that have been systemically shared during pregnancy as mature naïve or memory T cells, and which subsequently have engrafted and survived for years.

TRBV GENE USAGE

For out-of-frame TCR beta variants, which are not expressed and thus avoid any selection, TRBV gene usage was slightly more similar but generally comparable for related versus unrelated mother-child pairs (Figures 2A and 3). This indicates that inherited maternal factors associated with the TCR recombination machinery are insufficient to yield essentially similar TRBV gene segment usage in the child. Remarkably, both within related and unrelated pairs, TRBV gene segment usage in low-frequency in-frame TCR beta clonotypes was more similar compared to that in the out-of-frame TCR beta variants (Figure 3). The equalization of the usage of TRBV gene segments in functional TCR variants (Figure 4) is probably a manifestation of selective pressure during thymic T cell selection, which should distinguish TRBV gene usage in functional TCRs from that preserved in unselected, out-of-frame TCR beta variants. This pressure on relative TRBV usage frequencies was prominent and led to significant convergence in both related (P = 0.0006) and unrelated (P = 0.0015) pairs, indicating that thymic selection essentially and similarly shapes the initial output of the TCR recombination machinery at the population level. Interestingly, thymic selection also essentially filters out the longest CDR3 variants with large numbers of added nucleotides, as can be concluded from our comparison of non-functional out-of-frame and in-frame TCR beta CDR3 repertoires (Figure 7A).

[Figure 7 legend (panels B and C): (B) Modeling of added nucleotide curves for shared TRBV-CDR3-TRBJ variants between mother-child pairs based on input of mature-microchimeric TCR beta CDR3 variants in different proportions. The model curves were derived from the curves in (A), which depict added nucleotide distributions for shared clonotypes between unrelated individuals (gray; equivalent to a near-zero contribution to shared clonotypes) and for any individual repertoire (black; equivalent to a 100% contribution to shared clonotypes), mixed in different proportions. Lines represent model input where mature-microchimeric TCR beta equals 100, 33, 3.3, or 0.3% of shared clonotypes. The shaded region shows the range for unrelated pairs; the inset shows a magnified view. (C) Added nucleotide curves for TRBV-CDR3-TRBJ variants shared in each related mother-child pair. The shaded region shows the range for unrelated pairs.]
Since TRBV gene segments encode the fragments of TCR chains that interact with MHC (33-35), we would expect that related mother-child pairs, being haploidentical (i.e., sharing at least 50% of HLA alleles), are characterized by more similar TRBV gene segment usage frequencies at the level of functional T cells compared to unrelated donor pairs, due to the impact of identical HLA genes in thymic selection. Indeed, we observed that differences in TRBV gene segment usage in related versus unrelated pairs became more pronounced and statistically significant (P = 0.02) at the level of low-frequency, in-frame TCR beta CDR3 clonotypes (Figure 3). However, the general direction of TCR beta repertoire shaping was similar for related and unrelated donors, suggesting that the pressure of thymic selection is relatively homogenous in the population. The strength of this general pressure was far greater relative to the specific changes that were characteristic of related donors, which only added a minor codirectional trend (Figures 3 and 4). The subset of high-frequency TCR beta clonotypes was characterized by increased variability in TRBV segment usage, and related and unrelated mother-child pairs were indistinguishable at this level (Figures 2C and 3). This is presumably due to the fact that different antigen specificities (but not TRBV segment interaction with MHC) play a dominant role in the priming and expansion of T cell clones, and this semi-random process negates the initial correlations that we observed in TRBV gene usage at the level of naïve T cells. It should be noted, however, that the above analysis refers to low- and high-frequency clonotypes, which do not fully coincide with the naïve and antigen-experienced T cell subsets, respectively. It was previously demonstrated in other studies that recombinatorial biases might result in relatively high frequencies for certain naïve T cell clones, whereas some memory T cell clones may occur at relatively low frequencies (11, 14, 15). Moreover, these studies have shown a substantial overlap between the naïve and memory T cell repertoires, which suggests that a number of TCR beta CDR3 clonotypes could be associated with both subsets, being paired with either the same or alternative TCR alpha chains.

OVERLAP OF TCR BETA REPERTOIRES

We observed the greatest relative overlap of TCR beta repertoires among high-frequency clonotypes. This observation can be explained by the presence of common expanded antigen-experienced clonotypes recognizing the same antigens, as well as of high-frequency naïve clonotypes carrying TCR beta CDR3 sequence variants of low complexity that are repetitively produced in the thymus and may be highly represented both within and between individuals (15). In all comparisons, only slightly higher numbers of shared clonotypes were observed in related versus unrelated mother-child pairs (Figure 5). This observation is in agreement with the previous report by Robins et al., where the overlap in the naïve CD8+ CDR3 sequence repertoires was suggested to be independent of the degree of HLA matching, based on results obtained from three related donors (14). Here, we have achieved a more accurate comparison by studying a larger cohort of related donors, using unbiased library preparation techniques, sequencing the samples being compared on separate Illumina lanes to protect from potential cross-sample contamination on the solid phase, and performing deeper individual profiling.
Even with these various methodological improvements, we still observed only a subtle trend toward increased TCR beta repertoire overlap in related individuals. However, among the shared high-frequency amino acid CDR3 variants, the percentage of TRBV-CDR3 identical clonotypes was always higher for related pairs compared to unrelated ones, and correlated with the number of identical MHC-I alleles (Figure 6). This finding indicates that optimal recognition of a particular peptide-MHC complex often requires full functional convergence of the TCR beta chain, leading to an increased share of TRBV-identical common CDR3 variants in individuals carrying the same HLA alleles. Notably, this phenomenon was observed for bulk T cell populations, where the input of CD8+ T cells was sufficient to provide correlation. This correlation would probably be much higher if we were to specifically analyze sorted CD8+ T cells.

SEARCHING FOR PERSISTENT MATURE-MICROCHIMERIC CLONES

In humans, maternal T cells are present in different fetal tissues (46, 48, 51), and may be present in the cord blood at a frequency of 0.1-0.5% of total T cells (48, 52). This can represent hundreds of thousands or millions of cells, of which many are likely to be memory T cells (52) capable of further clonal proliferation. Transmigration of maternal differentiated effector/memory Th1 and Th17 cells through the placenta was recently demonstrated in mouse models (53). Transfer of mature T cells is also possible in the opposite direction, and the presence of fetal microchimeric CD4+ and CD8+ T cells has been registered in maternal blood during normal pregnancy in humans, predominantly in the third trimester (41), when mature α/β T cells are circulating in the fetus in significant numbers (54). Such mature-microchimeric T cell clones could further affect immunity to solid tumors (55, 56), influence transplantation tolerance (7), cause autoimmune diseases (3, 4, 43, 56-59), or protect the child against infections he/she has never encountered before. Recent work has demonstrated that, in general, experienced clonal T cells commonly persist in the body for many years (17, 60). We have observed more than 20,000 TCR beta clonotypes that persisted in a patient for at least 7 years (from 2005 until 2012), even after the patient underwent autologous HSC transplantation in 2009 [Ref. (16) and our unpublished data]. Similarly, naïve T cell clones persist in the body for many years after loss of thymus functionality (61). Therefore, if the engraftment of mature T cell clones transferred from mother to child and/or vice versa is a systemic process, we could expect to be able to verify the presence of such clones by using characteristic TCR beta CDR3 variants as clonal identifiers. In our repertoire analysis, we did not observe mature-microchimeric T cell clones at a level of methodological sensitivity of ~100 mature-microchimeric clones per 10^6 analyzed TCR beta clonotypes. Still, this does not preclude the existence of mature T cell-based maternal or fetal microchimerism at levels below the sensitivity achieved in the current study, in a minor number of individuals, or in pathological conditions such as autoimmune disease. It should be noted that deep TCR beta profiling methodology presently appears to be insufficiently sensitive for identifying particular expanded mature-microchimeric T cell clones, due to the general abundance of common identical TCR beta clonotypes.
The following combination of methods could offer a potential way forward: (1) deep TCR beta profiling suggesting the presence of a particular expanded mature-microchimeric T cell clone, preferably with many added nucleotides within CDR3; (2) cell sorting using a TRBV family-specific antibody in order to enrich for the hypothetical microchimeric clone of interest; and (3) real-time PCR confirmation of increased microchimerism in the sorted sample. We also believe that further development of NGS profiling methods, especially in combination with the use of live cell-based emulsion PCR to identify paired TCR alpha-beta chains (62), and to potentially identify TCR beta chains paired with specific HLA molecules serving as an internal marker of microchimeric clones, should greatly facilitate future studies of mature T cell microchimerism in health and disease.
Cardiac troponin T in extracellular vesicles as a novel biomarker in human cardiovascular disease

Dear Editor, Soluble cardiac troponin T (cTnT), an indicator of myocardial injury and stress, is used in decision management for patients with cardiovascular disease (CVD). As highly sensitive assays can detect elevated concentrations of cTnT even in healthy individuals (e.g. outside of myocardial necrosis, electrocardiographic changes or angina) and cannot distinguish among disease conditions [1, 2], a comprehensive understanding of the cTnT-secretome is an unmet need. Within the secretome, cTnT is not only present as a soluble factor but may also be contained within extracellular vesicles (EVs) [3]. EVs are nanoscale particles secreted by all cells, the cargoes of which can reflect the molecular composition of the cells of origin [4] and indicate disease or injury [5]. As EVs are easily sampled from plasma [6], they are being developed as a 'liquid' biopsy reflecting the disease state of the tissue of origin. Here, we advanced a fluorescence-based super-resolution microscopy technique, quantitative single-molecule localization microscopy (qSMLM), to robustly characterize cTnT-positive EVs. Importantly, we provide the first report of the cTnT-secretome across a spectrum of CVDs. EVs were purified from induced pluripotent stem cell-derived cardiomyocyte cell media (CCM), representing a source of cardiomyocyte-derived EVs (Figure S1), and patient plasma (Figure S2): healthy subjects (n = 5), patients with heart failure (HF; n = 5), hypertrophic cardiomyopathy (n = 3), type 1 myocardial infarction (MI-TI; n = 5) or type 2 myocardial infarction (MI-TII; n = 5), and chronic kidney disease (CKD; n = 5). In all cases (Figures S1 and S2), EVs had intact morphology and contained canonical EV markers (tetraspanins CD9, CD63, CD81; luminal marker TSG101) with low amounts of soluble proteins. According to dot blots (Figure S2C), the CD81 content of patient EVs was highly variable, whereas combined tetraspanins had more uniform expression. Table S1 shows patient characteristics. Control patients were younger and more likely to be female. Patients with MI and CKD had a higher incidence of diabetes and CAD, whereas HF patients had lower mean left ventricular ejection fraction and higher NT-proBNP. To enable molecular quantification of EVs using qSMLM, the following five steps were performed (see the Supplemental Methods): (1) covalent labelling of membrane proteins with fluorescent dye CF568 to detect EV membranes; (2) affinity labelling of cTnT with a specific AF647-tagged antibody (in mild permeabilizing conditions) to detect cTnT-EV cargo; (3) affinity labelling of EVs with unmodified tetraspanin antibodies to isolate tetraspanin-enriched EVs onto coverslips; (4) two-colour imaging to detect EV membrane and cTnT; and (5) data analysis [7] with robust molecular counting (Figure S3) [8] to quantify images. We first validated the assay and assessed CD81-enriched CCM-EVs (Figure S4). Overall, ∼14% of EVs contained cTnT; on average, these EVs had a diameter of 118 nm with two detected molecules of cTnT per EV (Figure S4D,E). Next, we affinity isolated tetraspanin-enriched plasma EVs (using a combination of antibodies against the canonical EV markers CD81, CD63 and CD9 for affinity pull-down). EV membrane and cTnT were fluorescently labelled and detected using qSMLM (Figure 1A). Characterizations of EVs for individual subjects and controls are provided in Figures S5 and S6.
The percentage of cTnT-positive EVs did not vary significantly across CVDs (Figure 1B). Although the overall size distribution of plasma EVs was similar across all CVDs (Figure 1C, left), cTnT-positive EVs were on average significantly larger (Figure 1C, right) with a typically narrower range of sizes (Figure 1D). This was in agreement with CCM-EVs (Figure S4D,E): cTnT-positive EVs had a larger average diameter with a narrower range of sizes. Importantly, cTnT-positive EVs in MI-TI, MI-TII and CKD (but not in HF) were smaller than those in healthy donors; the difference was significant both when EVs were averaged per subject (Figure 1C, right) and when EVs from subjects with different diagnoses were grouped (Figure 1E). This raised the possibility that the biogenesis of cTnT-EVs may differ across CVDs. Using a highly sensitive clinical assay, we measured soluble cTnT in circulation ('hs-cTnT'). Individuals with MI-TI had the highest hs-cTnT values, followed by MI-TII and CKD (Figure 2A, Table S1). qSMLM provided complementary information on cTnT content within purified EVs (number of detected cTnT molecules per EV). Interestingly, qSMLM for MI and CKD patients (compared to healthy or HF) detected a significantly lower average amount of cTnT per EV (Figure 2B,C). Consequently, in samples with elevated levels of hs-cTnT, qSMLM revealed a smaller EV diameter and reduced content of cTnT per EV (Figure 2D,E). No differences were observed by qSMLM between females and males in either EV size or cTnT per EV; only small variations were observed in EV size with body mass index (Figure S7). This study reveals a striking discordance between soluble clinical hs-cTnT in plasma and the number of qSMLM-detected cTnT molecules per EV across CVDs. Although our sample size was small (due to the work-intensive nature of qSMLM), we comprehensively characterized cTnT-positive EVs. Our findings are consistent with a prior study that detected cTnT in large EVs from infarcted mouse hearts and from patients undergoing cardiopulmonary bypass [3]. Additionally, our findings point to potentially distinct mechanisms of origin for circulating cTnT across the range of CVDs. Although MI leads to myonecrosis, which yields free cTnT in plasma, changes in wall stress in HF patients appeared to precede changes in release of cTnT, suggesting a potentially different mechanism for release of circulating cTnT in HF patients [9], such as the release of cTnT-EVs. A unique innovation of our approach is the molecular assessment of individual cTnT-positive EVs. These results may also provide context for why cardiac troponin detection in patients treated with sarcomeric modulators is not necessarily related to myonecrosis or poorer outcomes [1, 10]. In conclusion, the molecular profile of individual EVs offers important biological insight into cardiomyocyte biology and refines the measurement of established biomarkers. Our study demonstrates the presence of cTnT within EVs derived from cardiomyocytes in human subjects, including healthy controls. Further, our technology differentiated the biophysical characteristics and cTnT content of EVs across different CVDs. qSMLM data capture how cTnT-positive EVs differed among patients with different causes and severity of cardiac injury, providing complementary information to clinical hs-cTnT to more comprehensively describe the cardiac secretome. Future studies to determine the prognostic implications of cTnT-EVs are warranted.

ACKNOWLEDGMENT

We thank Dr. I.
Talisman for manuscript editing.

CONFLICT OF INTEREST

RS has served as a consultant for MyoKardia (concluded 2/28/2021), Cytokinetics (ongoing) and Best Doctors (completed 6/2021) and has been on a scientific advisory board for Amgen (ongoing). RS is a co-inventor on a patent for ex-RNA signatures of cardiac remodelling. KVJ is a member of the Scientific Advisory Board for HTG and Dyrnamix, neither of which has played a role in this study. SD is a founding member and holds equity in LQTT and Switch Therapeutics and has consulted for Renovacor, none of which played any role in this study.
Digital Analysis and Visualization of Swimming Motion

Competitive swimming is a demanding sport that requires rigorous training to achieve technical perfection. Research on computer simulations of flow has improved our understanding of how thrust and drag can be optimized for better performance. However, for a swimmer, translating this information into technical improvement can be difficult. In this paper, we present an analysis and visualization framework for swimming motion that uses virtual reality to display 3-dimensional models of swimmers. The system allows users to digitize their motions from video sequences, create personalized virtual representations by morphing prototypical polygonal models, visualize motion characteristics, and compare their motions to those of other competitors stored in a library. The use of virtual reality alleviates many problems associated with current video-based visualization methods for analyzing swimming motion.

I. INTRODUCTION

Swimming as a sport requires years of rigorous training to develop physical capacity as well as to achieve technical perfection. Competitive swimming can hinge upon fractions of a second to decide the winners, which has motivated research efforts to take advantage of computer simulations of swimming motion to improve athletes' performance. However, research on swimming motion has usually focused on the analysis of the fluid flow around the swimmer. Significant advances in computational fluid dynamics (CFD) have enabled simulations that help manufacture efficient swimsuits [1] and better understand the mechanisms of swimming motion [2]. As useful and important as these research efforts are, they neglect to address some fundamental issues. Even though swimming is a widely popular sport, and seeing an accurate computational fluid dynamics simulation of a world-class swimmer could help educate a swimming coach or an up-and-coming young swimmer, these benefits would be greater if the process could be individualized. As these simulations and visualizations usually require high-performance supercomputers, trained experts, and expensive software to be correctly run, it is unrealistic to assume that they can be accessible to a wide range of amateur or professional athletes. Another important issue is that these simulation results can be very beneficial to a swimsuit manufacturer or fluid dynamics expert, while a coach would be more interested in seeing whether a swimmer's stroke is technically sound. Traditionally, these analyses have been conducted using videos taken from several principal viewpoints, but problems present in using 2D videos in general (e.g., limited viewpoints, loss of 3D information) and inherent in swimming (e.g., occluders such as bubbles) make this task more difficult.
The visualization approach presented in this paper is motivated by the problems mentioned above. We aim to provide end users with an easy-to-use system where they can use information about individual swimmers to transform prototypical swimmer models and compare their motions to those of other swimmers stored in a library, be they elite competitors or their teammates. Since there are many commercial systems available for flow visualization and analysis, the emphasis in our research is on the acquisition, visualization, and comparison of motion rather than flow. The introduced approach allows the acquisition of motion from videos, storage of these motions in a library, morphing of prototypical swimmer models for analysis (and possibly for fluid dynamics simulation [3]), and comparison using an interactive 3D visualization approach. The 3D visualization is augmented with tools that show information about the speed and acceleration of joints or user-selected points on the swimmer using numerical outputs, traces, and graphs.

The paper is organized as follows. In Section 2, related work on swimming and sports motion analysis is discussed. Section 3 describes our motion acquisition approach. In Section 4, we introduce the methodology used to morph prototypical swimmer models for analysis and visualization. Section 5 presents the visualization approach used. We conclude with a discussion of these approaches and future work in Section 6.

II. RELATED WORK

In simplest terms, a swimmer needs to maximize thrust and minimize drag to swim faster. Toussaint et al. [4] give an excellent overview of the biomechanics involved in swimming. However, thrust and drag can be hard to measure in an aquatic environment; see [2] for an overview. Some examples of simulations include the dolphin kick [3], using particle hydrodynamics [5], or simulating and visualizing flow around the body surface ([6], [7]). For a swimmer, these kinds of studies are limited in their flexibility and interactivity since they are computationally expensive and usually require specialized software.

For analyzing sports motion in general, there are mainly two methods. The first method is based on video analysis, where improvements can be made by comparing the videos of trainees with those of elite athletes. Lok and Chan [8] developed a model-based human motion analysis system that can track human movement in a monocular image sequence with minimal constraints and without markers or sensors attached to the subject. Given a clip of video, they first manually fit a 3D human model to the subject in the first frame of the video; they then perform background subtraction, silhouette extraction, and discrete Kalman filtering to predict the pose of the subject in each image frame. Saito et al.
[9] introduced methods for sports scene analysis and visualization by integrating tracking data from multiple videos captured with multiple cameras. They also present a method for free-viewpoint visualization of a soccer game, a technique for visualizing the soccer scene by view interpolation between real cameras near the virtual viewpoint at each frame. Additionally, there are commercial systems available for video analysis such as Dartfish [10] and Sports Motion [11]. The common drawback of video-based systems is that they limit the user to the viewpoint of the camera. This is even more troublesome in swimming, where the video can contain artifacts due to reflection and refraction and can contain occluders due to splashes and bubbles. Furthermore, filming underwater is more restrictive and is usually limited to cameras whose locations are mechanically controlled.

The alternative approach is based on virtual reality, where the athlete can learn and improve performance mainly through interaction with the virtual environment. Pingali et al. [12] use motion trajectories and heat maps obtained by digitizing tennis ball motion. Bideau et al. [13] use virtual reality to study the perception-action loop in athletes: how perception influences choices about which action to perform, and how those choices influence subsequent perception. They design a framework that uses video-game technology, including a sophisticated animation engine, and use this framework to conduct two case studies. The first was a perception-only task to evaluate rugby players' ability to detect deceptive movement. The second concerned a perception-action task to analyze handball goalkeepers' responses (action) when facing different ball trajectories (perception). These case studies demonstrate the advantages of using VR to better understand the perception-action loop and thus to analyze sports performance. Motion capture in general has become very important in this sense to transfer the motions of athletes to the virtual world, but unfortunately it is of limited use in our application domain because of the inherent difficulties caused by the swimming motion taking place in water.

Our work follows the virtual reality approach and introduces methods to adapt current approaches to the swimming domain. Our goal is to create an intuitive and easy-to-use framework where swimmers can individualize the process to visualize their own motions in 3D and make comparisons to other athletes of varying ability levels. We have chosen to focus on visualization of swimming motion rather than the flow itself, since this, to our knowledge, has not been researched thoroughly before. However, flow visualization can easily be incorporated into our framework since we use standard OpenGL rendering methods. Furthermore, our human body parameterization approach makes this process easier by providing an athlete-specific model based on measurements, which can be used to perform CFD simulations.

III. MOTION CAPTURE USING SYNCHRONIZED VIDEO SEQUENCES

Although different types of motion capture systems already exist, they cannot be used to capture motion under water. Optical systems cannot be used due to the interference (reflection and refraction, water bubbles, etc.)
from the water, and electromagnetics-based systems are not water resistant and may interfere with the swimming motion. Thus, there is a need for a system that can accurately capture the motion of swimmers. Through a motion storage module, digitized motion can be stored in online databases and can be used to analyze and visualize swimmers in 3D.

The module provides a semi-automated system for the user to mark the location of the limbs on a number of frames in the video. The video is accompanied by a still image of the swimmer standing in a T position, which is used to calculate the lengths of the limbs. A prototypical articulated figure is then overlaid on top of the video frame, allowing the user to see the positions of the limbs. The module outputs 3D motion data that correspond to the video. This data can then be stored using the storage module and viewed in the visualization and analysis module. Since one of the most common types of video is one taken from the side (e.g. Fig. 1, Image B), we started with the assumption that this is the only footage available and implemented the system accordingly. The addition of other orthogonal synchronized viewpoints can improve the acquisition of a more robust and accurate motion. In order to determine the relative lengths of the limbs, a still image of the standing swimmer in a T-pose is taken from the front. The user first clicks on the positions of all the joints in the still image. Using these joint positions, the length of each bone is calculated. The user then clicks on the joints in each of the video frames.

The lengths of the limbs are used to calculate the z-coordinate of each joint position. Since the joint positions provided by the user are in two dimensions (x and y coordinates), the third dimension (z) needs to be calculated by the system. This is done by calculating the ratio of the limb lengths from the still image and the length visible in the 2D video image. Since we know the projected limb length in the image (p) and the real length measured before (r), the depth (d) can be calculated using simple trigonometry with (1):

d = √(r² − p²) (1)

This has to be done in a hierarchical manner, where the depth of each joint in the skeleton is calculated relative to its parent joint. The only missing piece of information is the orientation of the limb, that is, whether it is going into the screen or coming out of it. This can either be provided by the user or derived from the second synchronized video sequence. Once the joint positions are calculated, they are converted into joint angles for each joint. Vectors defining the orientation of each of the two bones are first calculated using the joint positions. The dot product of these two vectors then gives us the angle between them. A video sequence of a breaststroke was used to test the system. The user selected a few frames from the video and defined the joint positions in each. The resulting digitized motion data is shown in Fig. 2.
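To make the geometry of this step concrete, the following Python sketch (ours, for illustration only; the function and variable names are not part of the described module) recovers the depth offset of a child joint from the true and projected limb lengths via the Pythagorean relation, and converts joint positions into joint angles using the dot product, as described above.

```python
import numpy as np

def relative_depth(true_length, projected_length, sign=+1.0):
    """Depth offset of a child joint relative to its parent.

    true_length: limb length measured from the frontal T-pose still image.
    projected_length: limb length visible in the 2D video frame (same units).
    sign: +1 if the limb points out of the screen, -1 if it points into it
          (supplied by the user or inferred from a second synchronized view).
    """
    p = min(projected_length, true_length)   # guard against measurement noise
    return sign * np.sqrt(true_length**2 - p**2)

def joint_angle(parent_pos, joint_pos, child_pos):
    """Angle (radians) between the two bones meeting at joint_pos."""
    u = np.asarray(parent_pos) - np.asarray(joint_pos)
    v = np.asarray(child_pos) - np.asarray(joint_pos)
    cos_a = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(cos_a, -1.0, 1.0))

# Example: a 30 cm upper arm that appears 24 cm long in the image
# extends sqrt(0.30^2 - 0.24^2) = 0.18 m out of the image plane.
print(relative_depth(0.30, 0.24))                                 # ~0.18
print(np.degrees(joint_angle([0, 0, 0], [1, 0, 0], [1, 1, 0])))   # 90.0
```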
IV. HUMAN BODY SHAPE PARAMETRIZATION USING STANDARDIZED ANTHROPOMETRIC MEASURES

For effective visualization and computer simulations, we need an accurate representation of the swimmer's body. Range scanning allows the extraction of very detailed human body shapes. However, several artifacts are also introduced, such as noisy parts and holes. Substantial effort is needed in order to convert these geometries into models suitable for animation, fluid dynamics simulation and motion parameterization. While animation requires information about inner structures (e.g. the skeleton), motion parameterization additionally requires the mapping of specific anthropometric measurements (e.g. shoulder-elbow length) onto the resulting models. Furthermore, the range scanning process is expensive and is not likely to be available to many athletes. In order to alleviate these problems, we propose to morph a prototypical human body mesh into particular swimmers.

Prior research has focused on face deformation for different goals, such as automatic face generation from anthropometric measurements [14], facial reconstruction for postmortem identification of humans from their skeletal remains [15], as well as growth and aging simulation [16]. Our method parameterizes human body shapes by using standardized anthropometric measurements. Three different control layers provide a method for easily generating several shapes with the input of: (i) weight and stature, (ii) weight and bone lengths (i.e. weight, stature, sitting height, biacromial breadth, bitrochanteric breadth, upper arm length, forearm length, hand length, thigh length, calf length, foot length), or (iii) all the measurements. For the coarse control layers, the remaining measurements are calculated by linear regression using the 1988 Anthropometric Survey of US Army Personnel.

Our model includes three measurement categories [17]: (i) Euclidean distances between two landmarks, (ii) axial distances between two landmarks with respect to an axis, and (iii) tangential distances, i.e. the distance between two landmarks over the surface of the skin. Table I shows the thirty-three anthropometric measurements included in our model, as they explain most of the representative changes in mass [17], [18]. The anthropometric measurements are mapped into a set of linear constraints among 114 landmarks, which are well-defined features over the skin, usually defined with respect to bone structures. For instance, the upper arm length is mapped as an axial distance between the Olecranon and the Acromiale, and the wrist breadth is mapped as a Euclidean distance between the Radial Styloid and the Ulnar Styloid.
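To illustrate the mapping just described, the sketch below (ours; the landmark indices, coordinates and goal lengths are invented toy values, not entries of Table I) encodes an axial and a Euclidean measurement as linear equations in the unknown landmark coordinates. Stacking such rows yields the least-squares problem whose iterative solution is described in the next paragraphs; the regularization toward the prototype landmarks is our own addition to keep the toy system well posed.

```python
import numpy as np

def axial_constraint_row(i, j, axis, goal, n_landmarks):
    """One linear equation: t_j[axis] - t_i[axis] = goal."""
    row = np.zeros(3 * n_landmarks)
    row[3 * j + axis] = 1.0
    row[3 * i + axis] = -1.0
    return row, goal

def euclidean_constraint_rows(i, j, goal, current, n_landmarks):
    """Linearize a Euclidean distance constraint into three axial constraints,
    distributing the goal distance along the landmarks' current direction."""
    direction = (current[j] - current[i]) / np.linalg.norm(current[j] - current[i])
    rows, rhs = [], []
    for axis in range(3):
        row, g = axial_constraint_row(i, j, axis, goal * direction[axis], n_landmarks)
        rows.append(row)
        rhs.append(g)
    return rows, rhs

# Toy example with three landmarks: acromiale (0), olecranon (1), radial styloid (2).
current = np.array([[0.0, 1.40, 0.0],    # shoulder
                    [0.0, 1.10, 0.0],    # elbow
                    [0.0, 0.85, 0.0]])   # wrist
A, b = [], []
row, g = axial_constraint_row(1, 0, axis=1, goal=0.33, n_landmarks=3)   # upper arm length
A.append(row); b.append(g)
rows, rhs = euclidean_constraint_rows(2, 1, goal=0.26, current=current, n_landmarks=3)
A.extend(rows); b.extend(rhs)

# Solve min ||Ax - b||^2 + lam*||x - x0||^2 so landmarks stay close to the prototype.
A = np.vstack(A); b = np.asarray(b); x0 = current.ravel(); lam = 1e-2
A_reg = np.vstack([A, np.sqrt(lam) * np.eye(x0.size)])
b_reg = np.concatenate([b, np.sqrt(lam) * x0])
new_landmarks = np.linalg.lstsq(A_reg, b_reg, rcond=None)[0].reshape(-1, 3)
print(new_landmarks)
```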
Fig. 3 shows our three constraint categories. An axial constraint is given as a goal distance r between two unknown landmark coordinates t_i and t_j with respect to a principal axis, usually Y. This is a linear constraint, since only one coordinate (either X, Y or Z) needs to be modified in order to reach the goal distance. A Euclidean constraint is given as a goal distance r between two unknown landmark coordinates t_i and t_j. We linearize this constraint by generating one axial constraint for each of the three principal axes. Finally, a tangential constraint is given as a goal distance r between two unknown landmark coordinates t_i and t_j. We approximate this constraint as a set of Euclidean constraints between the m vertices on the shortest path from t_i to t_j in the prototypical mesh. Given that the above constraints were previously linearized, we iteratively solve an ordinary least squares problem as in [16] until a small error is reached. Thus, we compute a new set of 114 anthropometric landmark coordinates that best meet the anthropometric measurements of a particular person.

The prototypical mesh is then morphed by using radial basis function (RBF) deformation with a biharmonic spline as in [15,16]. The RBF function maps the original 114 landmarks to the ones computed above, and allows deforming every vertex in the prototypical mesh as well as the skeletal structure. Results of our system are shown in Fig. 4. The resulting mesh has an accurate representation of the skeleton, which is important for the visualization and comparison of different swimmers, and an accurate representation of the body surface, which can be crucial for fluid dynamics simulations.

V. VISUALIZATION AND ANALYSIS OF SWIMMING MOTION

In Sections 3 and 4, we have discussed methods to acquire the motion and to transform polygonal models to match the body types and measurements of the desired swimmer. The last step in our approach is presenting this information to the user. We have implemented an interactive visualization system that offers various visual and numerical analysis tools.

We start with the polygonal mesh that has been deformed to fit the skeletal and pose information acquired during the preceding steps. In other words, we use a model where the skeleton 'drives' the mesh. There is a wide body of literature and a number of commercial software solutions for mesh deformation, a subject beyond the scope of this work. We used Autodesk MotionBuilder™ to animate the polygonal model and saved each resulting frame. Intra-frame animations are done by interpolating in-between vertex positions. This provides us with the animation of a swimmer and the corresponding joint angle values for one or more stroke cycles (depending on the number of cycles acquired in the previous steps). An interactive 3D visualization approach has several advantages compared to 2D video-based visualization. First, the user has unlimited control over the viewpoint. This gives the ability to rotate around and zoom into the visualization freely and, consequently, to present multiple complementary views (Fig. 5). These multiple views are synchronized and help the understanding of complex motions by displaying possibly occluded regions.

Depending on the information required by the user, the animation can be presented in different ways: using a solid rendering with shading, a vertex rendering mode to visualize the skeleton as well as the swimmer, and a silhouette rendering mode to put more emphasis on the change in the joint angles (Fig.
6).Shadows (or more precisely, projections to X, Y and Z planes) can be added to help the user understand the body motion in 3D from a single view (Fig. 6b). One of the main advantages of using a skeleton driven animation approach is the presence of numerical information about the swimmer's pose in any temporal location.We take advantage of this by using a variety of visualization techniques. The first is displaying joint angles in their respective positions (Fig. 7a).In addition to this, the user can click: • Any point on the swimmer and display the current position, velocity and acceleration, • Two points to display the angle and distance between them (Fig. 7b), • Three points to display the angle formed by these three points. The second analysis tool our visualization system provides is displaying traces of joint positions.This way, the users can see how the joint locations change over time and make necessary improvements to their techniques.A swimmer would gain valuable and unique insight by being able to see the amplitude of their leg kick, the vertical displacement of the hips when they lift their head for a breath, or how much they press their chest downward in butterfly -all aspects that are known to affect performance but are difficult to quantify or visualize.Orthogonal projections of these traces are also provided to improve the understanding of their spatial relationships (Fig. 9).Even though our system provides valuable information about the motion of individual swimmers, the real benefits of these methods are realized by using the presented information for comparison.By using a library of stored motions, we can display multiple swimmers simultaneously (Fig. 8a).Swimming is a very technical sport that requires cyclical and precise motions athletes perfect after intensive training.Since the stored motions in the library are normalized in our acquisition step, swimmers of different capabilities can be displayed together.This can be a powerful comparison tool, which would enable an amateur swimmer to compare his swimming style to an Olympic champion.Even Olympic-level swimmers study video to compare and learn from other Olympians, so the flexibility in comparing techniques and ease of seeing the sometimes subtle differences is critical for an analysis tool to be practical and widely used.Without normalization, this comparison would be very difficult because the speed of an elite swimmer would no doubt be drastically different from an amateur, making a side-by-side comparison less effective.In the comparison mode, two sets of motion data can be loaded from the motion library.The users can take advantage of any combination of visualization tools described above: for instance, one swimmer can be displayed in shaded rendering mode while the other is shown as a silhouette, with traces added for selected joints to see how the motions differ between them (Fig. 8b). In summary, our visualization system provides users with various tools to display, analyze and compare the digitized motions of one or more swimmers.The tools presented here were chosen to address the problems in analyzing swimming motion.To our knowledge, this is the first visualization and analysis system designed specifically for swimming. VI. 
DISCUSSION AND FUTURE WORK

In this paper, we presented a framework to individualize the analysis of swimming motion. By transferring the captured motion from 2D video to 3D rendering, a more flexible and effective visualization can be achieved. The process enables athletes of various ability levels to digitize their captured motions and morph prototypical polygonal models, ending up with a visualization tailored to that specific swimmer. These models can then be used to visualize and compare with other motions stored in a library, enabling swimmers to compare their techniques to those of world-class athletes and to find areas to improve. The system was designed in close collaboration with USA Swimming, the governing body for the US Olympic swimming team, and received very positive responses from swimming coaches. Our aim is that the system should be used by athletes and coaches of various backgrounds. Therefore, special care was taken to ensure compatibility with a variety of software solutions. The pose information, which drives the polygonal model, is stored in the Acclaim (amc/asf) pose file standard. This means the users can use standard motion editing software or import motion capture data into our system. We aim to make the visualization software available to the public in the near future. Even though it was designed for swimming, our visualization system is general enough to be applied to a variety of sports applications where the analysis of subtle changes in a cyclical or repeated motion is important (e.g. cycling, baseball pitches, the football throwing motion). The focus of the work presented here was the analysis and visualization of motion, as we believed this was an important aspect that has usually been overlooked in swimming research, where flow visualization has received more attention. However, flow information can be easily incorporated into our visualization framework. Moreover, the morphing methodology introduced here can be used to perform athlete-specific fluid dynamics simulations.

One shortcoming of our system is the somewhat labor-intensive process of motion acquisition. However, the specific nature of swimming eliminates the possibility of conventional motion capture. Computer-vision-based techniques are also challenging because of artifacts such as water splashing or bubbles that occlude the swimmer in the video. Furthermore, since the swimmer's body is partly above and partly below the water surface, a computer-vision-based system would need multiple cameras that need to be synchronized. Even though such complicated setups are not widely available, we believe fully automating the motion acquisition process would greatly improve the usability of our visualization approach by increasing the number of stored motions in the motion library.
How Simple Hypothetical-Choice Experiments Can Be Utilized to Learn Humans’ Navigational Escape Decisions in Emergencies How humans resolve non-trivial tradeoffs in their navigational choices between the social interactions (e.g., the presence and movements of others) and the physical factors (e.g., spatial distances, route visibility) when escaping from threats in crowded confined spaces? The answer to this question has major implications for the planning of evacuations and the safety of mass gatherings as well as the design of built environments. Due to the challenges of collecting behavioral data from naturally-occurring evacuation settings, laboratory-based virtual-evacuation experiments have been practiced in a number of studies. This class of experiments faces the traditional question of contextual bias and generalizability: How reliably can we infer humans’ behavior from decisions made in hypothetical settings? Here, we address these questions by making a novel link between two different forms of empirical observations. We conduct hypothetical emergency exit-choice experiments framed as simple pictures, and then mimic those hypothetical scenarios in more realistic fashions through staging mock evacuation trials with actual crowds. Econometric choice models are estimated based on the observations made in both experimental contexts. The models are contrasted with each other from a number of perspectives including their predictions as well as the sign, magnitude, statistical significance, person-to-person variations (reflecting individuals’ perception/preference differences) and the scale (reflecting context-dependent decision randomness) of their inferred parameters. Results reveal a surprising degree of resemblance between the models derived from the two contexts. Most strikingly, they produce fairly similar prediction probabilities whose differences average less than 10%. There is also unexpected consensus between the inferences derived from both experimental sources on many aspects of people’s behavior notably in terms of the perception of social interactions. Results show that we could have elicited peoples’ escape strategies with fair precision without observing them in action (i.e., simply by using only hypothetical-choice data as an inexpensive, practical and non-invasive experimental technique in this context). As a broader application, this offers promising evidence as to the potential applicability of the hypothetical-decision experiments to other decision contexts (at least for non-financial decisions) when field or real-world data is prohibitively unavailable. As a practical application, the behavioral insights inferred from our observations (reflected in the estimated parameters) can improve how accurately we predict the movement patterns of human crowds in emergency scenarios arisen in complex spaces. Fully-generic-in-parameters, our proposed models can even be directly introduced to a broad range of crowd simulation software to replicate navigation decision making of evacuees. Introduction Over the last two decades, a rapidly growing interest has developed among researchers in a variety of scientific disciplines towards understanding the nature of pedestrian crowds. A great deal of research has been carried out to enhance the state of knowledge on behavioral mechanisms that govern the motion of large numbers of pedestrian humans in different environments and settings. 
Attention to the problem was particularly prompted by the occurrence of a number of recent mass crowd events culminated with disasters and casualties [1,2] that heightened awareness as to the importance of crowd management and control. Owing to the growing urban populations and the increasing frequency of similar mass events, this continues to be a critical safety-related research stream [3,4]. It essentially aims at developing reliable forecast tools that enable planners to reduce the likelihood of such disasters by conducting virtual (or simulated) evaluations on the design and performance of public crowded facilities. Despite major advances in this field, it is still thought of as an intractable problem many fundamental aspects of which are yet to be investigated [5]. The intrinsic complexity and variability of human behavior (as the constituent elements of crowds), the variety of circumstances and geometric features that need to be dealt with (ranging from large open areas, to complex confined spaces such as public transportation facilities or high-rise buildings), and the variety of contexts (such as entertainment events, sport events, political protests; or unanticipated emergency situations) that can give rise to different forms of behaviors in a crowd are among the primary challenges involved [6]. However, it is believed that the key challenge arises from the sparseness (or often lack) of the reliable empirical data [3]. Former studies have looked at the problem from a wide range of perspectives such as certain macro-scale phenomena that emerge as a result of the collective motion of crowds [7], issues affecting the psychological state of individuals in large crowds [8,9], or the impact of individuals' social association within a crowd on their behavior [10,11]. One particular topic to which a great deal of attention has been paid is emergency evacuations of crowds from confined environments. Introduction of the "panic escape" model proposed by Helbing et al. [12] inspired by the notion of Newtonian mechanics was followed by the development of a great number of models and methodological approaches in the field. One of the main features of this model is the recognition of individual escapees as the main elements of crowds, as opposed to more traditional approaches that assume escaping crowds as continuous fluid-type bodies [13,14]. This individual-centered approach, in general, postulates that the phenomena that emerge at aggregate level are engendered as a result of the disaggregate-level interactions. Owing to the reasonableness of this assumption, the approach has appealed to many researchers in this field. However, it requires replication and modeling of the relevant decisions made by each individual evacuee (as modeling entities). Two primary types of decisions pertain to the problem, one relating to the local (also known as "operational-level" or "walking") decisions that individuals adopt to avoid collision with obstacles and other evacuees while progressing towards their target. The other type of decision basically relates to the rules and mechanisms that govern evacuees' "global navigation" (i.e. choosing their target, such as the exit or route of navigation). There has been an awful lot of effort towards the former type of decisions in the modern literature. 
A number of data provision approaches and methods have been employed ranging from experiments with stressed insects (typically, ants) [15][16][17][18][19][20][21] or mice [15,22,23] to controlled laboratory experiments [24][25][26] and video-analysis of pedestrian movements in naturally-occurring environments (although not necessarily during evacuations) [27,28]. Also, a number of influential theoretical works have proposed adequately-defined modeling frameworks validated or calibrated based upon empirical observations [29,30]. In contrast however, far less is known about the global navigation behavior of evacuees such as their exit or route choice from a modeling point of view. With regard to knowing how evacuees make their global navigation decisions, fundamental behavioral questions are yet to be addressed as to (1) the factor(s) that contribute to exit decisions [5], (2) the way tradeoffs are made between them [31]; and (3) whether those factors and their influences dramatically differ from person to person [32]. The insufficiency of the behavioral data has thus far hindered making solid inferences from the existing literature to answer these questions. The nature of this type of decisions immediately rules out the relevance and possibility of employing certain data collection techniques, such as experiments with mass bodies of panicked insects or animals [33,34], and leaves researchers with fewer options. Researchers in this field have generally adopted two main approaches to gather empirical evidence for this particular problem. One has been conducting decision experiments in different forms of laboratory-type virtual-reality trials (i.e. virtual evacuations) ranging from static experiments in relatively simple geometric settings [35][36][37] to interactive experiments in the form of computer games [31,38,39]. A second approach that has been used by a fewer number of studies has been studying the behavior in action by conducting simplistic controlled mock evacuation trials in the "field" [40][41][42] or more precisely what experimental economists often refer to as "lab-like field experiments" or "framed field experiments" [43] (also, see [44] for a detailed taxonomy of the range of field experiments from an experimental economics perspective). As mentioned earlier, modelling observations of emergency evacuations in fully-natural settings (i.e. field data in its classical definition and meaning) are extremely rare. Merely, for the sake of the easiness of the communication, the term "field-type" in this paper is used with a slightly twisted (although not awfully imprecise) meaning compared to its traditional concept. Here, we refer to the controlled simulated evacuation trials (that, strictly speaking, are still of laboratory nature but are more realistic than the purely hypothetical settings) as "field-type experiments". This is purely to distinguish them from the counterpart experiments where the tight control of the experimenters is exerted over the design of the hypothetical decision situations (i.e. without any real action being observed) which we refer to as "hypothetical-decision laboratory experiments". To our knowledge, none of the studies that have made use of such field-type observations (i.e. controlled mock (or simulated) evacuation trials where the behavior of participants is observed in action) in the domain of exit decisions has provided choice data at disaggregate level that can serve as modeling materials. 
That has left the experimenters of those studies with few analysis options beyond extracting simple descriptive statistics on macro-scale measures (e.g. total evacuation time, density distribution, etc.). Similar to many other areas of social sciences, notably economics, virtual environments in this context are believed to offer more flexibility at lower operational and logistics costs than those of the field-type experiments. Most importantly, they allow a perfect control of variation, often described as the "foundation of empirical scientific knowledge" [45] (see page 535). Particular to this context, they allow the analyst to take tight control of the moments and situations at which participants' decisions are made, thus avoiding any ambiguity in terms of the set of alternatives or the level of attributes of those alternatives. They also allow the designer to stretch the attributes of alternatives to extreme levels, avoid correlations between attributes, and have participants make meaningfully complex tradeoffs [46]. Fairly large samples can be collected at a reasonable cost and within a reasonable course of time. More importantly, pertinent to this particular context again, they provide a safe and ethical environment for studying a range of potentially dangerous situations that do not occur on a day-to-day basis in real life but might have significant consequences. However, it is largely unknown whether or not the inferences made based on the hypothetical-choice experiments carried out in virtual settings are consistent with activities in natural environments and thus can reliably be generalized to real-world situations. The problem is also referred to as "hypothetical bias" or "contextual bias" [47] in the economics domain. Hotly debated in the literature of social sciences [45,48,49], the crucial question is whether generalizations should be merely limited to qualitative behavioral insights when certain data patterns are observed, or whether the analyst can go beyond that and even make reliable quantitative extrapolations. The question of generalizability has been greatly discussed in the economic literature (as well as in other areas of social sciences) and has been described as the most fundamental question in experimental economics [48]. This discussion becomes even more fruitful in this particular topic when taking into consideration certain context-specific elements that do exist in real-world evacuations but fail to be replicated by experimenters in the vast majority of virtual experiment settings. Because of the lack of objective comparative investigations connecting these two types of observations, as well as the sensitivity of the topic and its potential implications, researchers in this area have remained overly skeptical and cautious about the authenticity and external validity of the findings obtained from hypothetical-choice observations collected in virtual evacuation studies [50,51]. On the other hand, (framed or controlled) field experiments are regarded as a more plausible form of collecting empirical observations. They have been thought of as a great compromise between "controlled variation" and "realism" in this context, similar to their description in the economics literature as "an attractive marriage" between the lab and the real world [49]. The question of external validity, to our knowledge, has particularly remained unaddressed in the context of evacuation modeling due to the paucity of real-choice data.
Here, we make a pioneer attempt in addressing the aforesaid problem by linking between two types of empirical choice observations. We report on a simple laboratory-type experiment of hypothetical emergency exit choices and their imitations in a more realistic setting in action (i.e. a framed field-type experiment). Two exit-choice datasets are generated accordingly, "stated" or "hypothetical choices" (HC) and "revealed" or "real choices" (RC) of emergency exit. The hypothetical exit scenarios were simply presented to participants as pictures. Participants were sampled from the pedestrians who exited a particular building, to which the hypothetical scenarios were referred (i.e. decisions were hypothesized to be made in an imaginary emergency exit occurring at the same building which the participant had just left (in a normal situation). We interviewed 169 individuals and each participant responded to 14 hypothetical exit scenarios. This provided us with a dataset of 2338 HC observations. In the field, similar scenarios were then replicated in a more realistic fashion by conducting a number of mock (i.e. simulated) evacuation trials in an artificially-built model of (the floor level of) the same building depicted in the abovementioned pictures. Paid participants were recruited to perform the instructed evacuation tasks in the model building (up to 150 individuals performed each trial scenario). The experiments were recorded and then the escape movement of each participant was analyzed individually. Their movement trajectory was extracted using specialty software. The trajectory, body movement and head orientation of each participant was closely inspected to identify the likeliest moment their decisions were made. This provides us with a data set of 3015 RC observations. Econometric models were then estimated on each dataset. For the modelling, we employed a well-established class of econometric models known as discrete-choice models to infer from individuals' choices their evaluation, perception and prioritization of different attributes that contribute to their navigational decisions. We estimate fixed-parameter multinomial logit (FP-MNL) and random-parameter multinomial logit (RP-MNL) models as two well-practiced axiomatic paradigms of modeling human choices conceptualized by the notion of random utilities. A joint model estimation is also performed on the combined data using a generalized multinomial logit (GMNL) method [52] in order to identify possible parameter scale differences between the estimates inferred from the HC and RC contexts (as a measure of the randomness of decisions made in each context). Hypothetical-choice experiments The HC experiments were carried out during March and April 2014. The experimental procedure was approved by the Monash University Human Research Ethics Committee to which the authors were affiliated at the time of conducting the experiments. All methods were carried out in accordance to the approved guidelines. Also informed verbal consents were obtained from all subjects who accepted to participate in the experiment. The verbal consent method was chosen in order to facilitate the process of interview considering that the interviews took place at public places, and this was approved by the abovementioned ethics committee so long as the interviews were kept anonymous which was the case in this experiment. Participants were compensated for their time at the end of the interview by a modest amount of cash. 
We interviewed pedestrians as they exited a building in Melbourne, Australia. Participants were sampled from exiting pedestrians. They were invited to a survey in which they were introduced to 14 hypothetical exit-choice scenarios. The choice scenarios presented a tradeoff between certain factors that are assumed to affect the choice of exit during an emergency evacuation (the design variables). The choice scenarios assumed that the evacuation is taking place in the floor level of the same building that the participant had just exited. The hypothetical position of the subject (as well as the other attributes) was shown in each scenario and the subjects were asked to hypothesize the presented decision situation as an emergency scenario and choose the exit that they would choose if an emergency egress arose in that building. The variables (exit attributes) whose relative weights are quantified by our models are: the number of evacuees "congesting" around each exit area (CONG), the spatial "distance" of the decision maker to each exit (DIST) (measured in meters), whether the exit is "visible" or "invisible" to the decision maker (VIS) (a 0-1 categorical variable with 1 corresponding to the visible exits), and the number of moving pedestrians (the size of directional flows) headed to each exit (denoted as FLTOEX, standing for "flow towards exit"). We distinguish, in our modeling, between directional flows based on the visibility of the exit to which those flows are headed. We label the flows headed towards visible and invisible exits as FLTOVIS and FLTOINVIS, respectively. When the exit is invisible as a result of the presence of architectural obstacles, the congestion level associated with that exit is not presented to the decision maker (Fig 1(A)) but those exits were allowed to still be chosen by the participants. In total, 28 scenarios were designed and were divided into two blocks of 14 scenarios each. The participants were randomly assigned to one of the two blocks. Thus each subject only responded to 14 choice situations. The reason for generating multiple blocks (as opposed to one single block of 14 scenarios) was to create broader attribute range and greater attribute variability in the HC data that is essential for efficient parameter estimates (see [53] for a more detailed discussions on the variability issue). All 28 choice scenarios can be found in S1 Fig. A minimum sample size calculation was also carried out before sampling according to [54] to ensure a sufficient number of observations is gathered. In total, 169 individuals were surveyed resulting in a data set of 2338 HC observations. The spreadsheet file containing this dataset can be accessed in S1 Table. There were also a few reasons we chose interviewing participants in the location to which the scenarios referred (i.e. the building whose map appeared in Fig 1) over the conventional computer-based counterpart experimental method conducted in the form of online surveys. An important difference between experiments of this type and many experimental games in economic studies is that there is no financial consequence associated with the decisions that individuals make in these choice experiments. In best cases, fixed modest monetary incentives are offered to each participant regardless of their responses to the experimental scenarios (as was the case in our experiments). Hence, there is always a concern that unreliable responses may be provided as a result of participants' inattention or lack of interest or motivation [55,56]. 
It is believed that by linking the hypothetical scenarios to a similar decision context that participants had just experienced in real life and of which they had a clear memory, and also by articulating and explaining the purpose and importance of the experiments on a face-to-face basis, participants are likelier to relate more realistically to the experiments [57]. Recent studies have provided emerging evidence suggesting that the idea of referencing HC choice experiments relative to a real experience offers promise in the derivation of reliable estimates and the mitigation of the hypothetical bias [47]. Also, we assumed that participants' inattention was likelier to be prevented as a result of being scrutinized by the interviewer, who tried to keep them engaged [56]. Also, the egress geometry was likelier to be perceived more realistically and accurately as a result of participants having a clearer memory and perception of the location to which the scenarios referred. In another publication we quantified the extent to which this method succeeded in reducing the variance of random noises in the resultant models compared to a sample collected on a purely random basis from students in lecture classes (where we eliminated the element of face-to-face interviews and referencing to the real-choice context) [58].

Field-type (realistic-choice) experiments

The experimental procedure was approved by the Monash University Human Research Ethics Committee to which the authors were affiliated at the time of conducting the experiments. All methods were carried out in accordance with the approved guidelines. Also, informed consents were obtained from all subjects who accepted to participate in the experiment. The participants were invited using electronic mail invitation letters and used an online registration system to accept the invitation, whereby they provided written consent as part of their registration. The registration procedure was also approved by the abovementioned committee. Participants were catered for and were also compensated for their time at the end of the trials (which lasted from about 10:00 a.m. to around 3:00 p.m.).

One hundred and fifty participants were hired to conduct the trial tasks. In total, 20 trial runs were carried out and treated as emergency scenarios, with each scenario being a combination of certain design factors including the number of evacuees (either 75 or 150), the quantity of the available exits (2, 3 or 4), the position (i.e. spatial distribution) of the exits and the width (i.e. capacity) of the exits (either 50 cm or 100 cm). Full details of the trial runs can be found in S2 Table. Participants were instructed to wait at the initial position of the evacuation room (see S2 Fig) and, upon the start of each run, they were asked to enter the model building and compete with others to escape from the room as quickly as possible, assuming that they were running away from an acute threat (no specific type of threat was mentioned). The initial positions were randomized from run to run so that different individuals faced different choice situations at each run (i.e. to avoid certain individuals being located at the front of the crowd every time and thus not facing much congestion in any of their choices). They were kept motivated by the experimenters to keep running and looking for the best escape options using verbal messages repeated throughout each run.
A square-shaped obstacle positioned inside the room replicated a major architectural barrier that was present at the corresponding position in the real building. The participants were not aware of the exact configuration and setup of the model building at each run, in order to avoid a habituation effect; however, they knew that there would be multiple escape alternatives (i.e. exits) at each run and that they would be able to find exits at both sides of the obstacle inside the room. A camera located at a height of 8 meters above the floor (whose legs were positioned inside the aforementioned square obstacle) recorded the trials and the subjects' movements. Fig 2 shows snapshot samples from the raw footage of two trial runs. Also, the raw footage of two of the trial runs has been sampled and can be found in S1 and S2 Videos. We had the participants wear colored beanies to facilitate the automatic recognition and tracking of their positions inside the evacuation room with the software that we used for this purpose. We used specialty software called PeTrack [59], developed by the fourth author of this paper particularly for the analysis of pedestrian movements and the extraction of their trajectories under experimental conditions. The parameters of the software were calibrated according to the conditions of the experiments in the field (further details are provided in S1 Text). The trajectory of the movement of each individual appearing in these scenarios was extracted.

One major challenge that we faced in terms of converting the experimental observations to choice data was identifying the moments at which individuals had made their decisions. This clearly did not pertain to the HC experiments, as the subjects had to make decisions in situations pre-specified by the experiment design in that case. Here, however, we had to resort to any available external clue and indicator in order to identify the decisions. In doing so, in addition to the trajectory of their motion, their body movement and head orientation were closely examined as they moved, to identify the likeliest moments when each subject made their exit decision. For this purpose, we attended to the movement of each subject individually, looking for the moments after which they did not considerably explore different alternatives. This was typically accompanied by consistency in their body movement and their trajectory (e.g. no major change of movement direction). We defined such moments in the subjects' movement as their "decision moment". Since the process of choice extraction required human judgment (as described above), we decided not to automate the choice extraction phase; thus, all the observations were collected manually by inspecting every single subject's movement in the PeTrack environment. The data extraction process was executed by 11 coders in total, including the first and the third author of this article. In total, 9 paid coders (6 undergraduate and 3 postgraduate students from the institute of the first three authors) were recruited and trained to perform the data extraction procedure (which in total demanded nearly 500 hours of work). Multiple training sessions in groups of 4-5 were practiced in order to ensure consistency between the work of different coders.
Yet, as the decision identification process could be subject to the judgment of the coders, the sensitivity of the inferred parameter estimates was examined during the preliminary data analyses by performing model estimations on different blocks of the observations generated by different coders. An acceptable level of inter-sample variation was observed between the data segments provided by different coders, indicating no systematic bias engendered by the judgment of individual coders. Also, another matter to be taken into consideration in relation to this issue is that minor errors in finding the decision moments are not supposed to make a drastic difference, considering that the dynamics of the evacuation space (and, as a result, the attributes of different alternatives relative to one another) do not show rapid variations over incremental amounts of time. In more specific terms, for example, the exit that is most congested at time t is very likely to remain the most congested exit at times t+Δt or t-Δt as long as Δt is a relatively small increment. The same concept applies to the relative distances of the subject to different exits, or the visibility status of the exits to the subject. However, it might be the case that the flows moving to an exit at time t turn into congestion around that exit at t+Δt and vice versa. We regard this as the aspect of the data extraction procedure that may be most sensitive to the identification of the exact decision moments. Fig 3 exemplifies two subjects and their trajectories at their identified decision moments. For some subjects who clearly made more than one decision (individuals who made an initial decision, progressed towards it and then changed their choice, often in response to the excessive congestion developed around their chosen target), more than one choice observation was extracted. Once the decision moment was identified, the chosen alternative (i.e. the dependent variable), the choice set and the attribute levels of all alternatives (i.e. the independent variables) were recorded as a single "choice observation". The measured variables were exactly the same variables as discussed previously for generating the HC scenarios. Fig 4 illustrates more details on how attribute levels were measured at extracted decision moments. A data set of 3015 RC observations was collected in total. The serial correlation effect (the fact that the same group of people made decisions and contributed observations to the data from run to run) had to be downplayed in the RC data, considering that recognition of individuals' identity between different runs was impossible from the footage (as participants all wore similar beanies). They could only be recognised individually within each run, and this effect has been embodied in the RC data (clearly, for any individual who contributed multiple observations within each single run). The extracted data file can be accessed in S3 Table.

Analysis and Modeling Method

Individual n's choice among J_n alternatives at choice situation t (t = 1, . . ., T_n) is the one with maximum utility, where the utility functions are as in Eq 1, consisting of a deterministic part V_nit and a random disturbance part ε_nit. The vector x_nit is the vector of all attributes that appear in all utility functions (Eq 2) (defined previously), ε_nit represents random unobserved factors of the utilities, β_n is the vector of utility coefficients (with probability density function denoted as g(β_n)), and θ_n is the utility scale factor.
The coefficient vector is specified as β_n = β + Γω_n, where β is the vector of means of the coefficients, ω_n is a vector of independent standard normal variables, and Γ is the Cholesky factor of the covariance matrix, S = ΓΓ′ (see [60] for an exact definition). Random unobserved factors are assumed to be distributed identically and independently as (standard/normalized) Extreme Value Type 1, with the density function given as f(ε_nit) = exp(−ε_nit)·exp(−exp(−ε_nit)) (see [61,62] for further information about the characteristics of this distribution). The scale factor is also specified as θ_n = exp(δ·z_n), with z_n, the scale dummy variable, being a 0-1 binary variable that equals 1 when the choice by individual n has been made in the HC context. The above specification relates to the GMNL model where the two datasets are pooled (i.e. HC+RC data). The model reduces to RP-MNL when δ = 0, and will collapse further to FP-MNL when δ = 0 and ω_n = 0 at the same time. The probability of the individual making the same sequence of choices they were observed to have made in the data, i = ⟨i_1, . . ., i_T⟩, is given in Eq 3 and is introduced to the likelihood function as the computational term associated with the information obtained from that individual. The log-likelihood function (LL) is defined in Eq 4 and the estimates are inferred from a simulated maximum log-likelihood procedure. The goodness-of-fit measure, also known as pseudo rho-squared, is defined as the ratio of the improvement in log-likelihood after reaching convergence (i.e. at the maximized log-likelihood, LL*) relative to its initial value (i.e. where all parameters are set at zero, LL_0) (Eq 5).
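To make the estimation machinery concrete, the sketch below (our own illustrative Python implementation, not the authors' estimation code; the data layout and the number of draws are assumptions) follows the specification above: conditional on a draw of the random coefficients, the Extreme Value Type 1 disturbances give logit choice probabilities, the probability of a person's observed sequence of choices is the product over that person's choice situations, and the simulated log-likelihood averages this product over draws of ω_n before taking logs.

```python
import numpy as np

def sequence_prob(X, chosen, avail, beta, theta):
    """Probability of one person's observed sequence of exit choices,
    conditional on a coefficient draw (logit kernel).
    X: (T, J, K) exit attributes; chosen: (T,) chosen-exit indices;
    avail: (T, J) 0/1 availability mask; theta: utility scale."""
    V = theta * np.einsum('tjk,k->tj', X, beta)        # systematic utilities
    V = np.where(avail == 1, V, -np.inf)               # mask unavailable exits
    eV = np.exp(V - V.max(axis=1, keepdims=True))      # numerically stable softmax
    P = eV / eV.sum(axis=1, keepdims=True)
    return np.prod(P[np.arange(len(chosen)), chosen])

def simulated_log_likelihood(persons, beta, Gamma, delta, n_draws=200, seed=0):
    """GMNL-type simulated LL with beta_n = beta + Gamma @ omega_n and
    theta_n = exp(delta * z_n).  Setting delta = 0 gives RP-MNL; setting
    Gamma = 0 as well collapses the model to FP-MNL."""
    rng = np.random.default_rng(seed)
    ll = 0.0
    for X, chosen, avail, z in persons:                # one tuple per respondent
        omega = rng.standard_normal((n_draws, len(beta)))
        p = np.mean([sequence_prob(X, chosen, avail, beta + Gamma @ w,
                                   np.exp(delta * z)) for w in omega])
        ll += np.log(p + 1e-300)
    return ll

# Pseudo rho-squared at convergence: rho2 = 1 - LL_star / LL_0,
# where LL_0 is the log-likelihood with all parameters set to zero.
```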
Results

Table 1 provides the results of the parameter estimates using the two models (i.e. the FP-MNL and RP-MNL models) specified in the previous section, as well as the joint estimation results (i.e. the GMNL model). Also, Fig 5 provides a visual presentation of the means of the estimates by depicting their 95% confidence intervals. In terms of goodness of fit, the models estimated on RC observations prove to have provided a substantially better statistical fit to the data. In the following, we contrast the models estimated on the two different observation types from a number of relevant econometric and behavioral perspectives.

Sign and significance of the estimates

Models estimated on both the HC and RC datasets show a general agreement between the two observation types, as similar estimation patterns are clearly observable (Fig 5). There is a perfect agreement between the HC- and RC-based models in terms of the sign of the estimated coefficients as well as their statistical significance (of the means). All the variables specified in our models proved to be statistically meaningful at the highest conventional statistical levels. As intuitively expected, the distance (DIST) and congestion (CONG) factors both impact negatively on the utility of the exits, and visible exits generate higher utilities than invisible exits (reflected in the positive mean estimate of the VIS variable). Both types of observations also agree upon an important estimation finding suggesting that the perception of the directional (or moving) crowd flows observed by decision makers significantly depends on the visibility of the exit to which that flow is moving. The hypothesis that the utility coefficients associated with the FLTOVIS and FLTOINVIS variables are equal is rejected (by both of our datasets and both the FP-MNL and RP-MNL modelling methods) at conventional confidence levels. In other words, it is strongly suggested by all the estimated models that flows of evacuees are perceived as negative utility elements when the exit to which that flow is headed is visible to the decision maker, whereas such flows are suggested to (on average) add to the perceived utility of the exit when the decision makers themselves cannot see the exit targeted by the flow.

Following versus avoiding others

Strongly suggested by both observation types, the finding discussed above (i.e. the difference between how moving flows are perceived based on the visibility of their targets) has important implications as to our understanding of the role played by social influences in making emergency navigational decisions, which is often referred to as "herding" or "follow-the-crowd" behavior in the literature [3,4]. There has been a widespread conventional belief in the literature (often based on pure speculation and anecdotal evidence) as to the possibility of the formation of a dominant tendency during evacuations to follow what others do for their escape [12]. Important relevant fundamental questions, however, are yet to be answered based on concrete empirical evidence. It is still not known with certainty whether the herding behavior per se can be considered as a sole determinant of the evacuees' escape decisions, nor is the role of context-specific factors (which may strengthen or moderate this tendency) adequately understood. Also, it is not clear whether the tendency to follow others is a global behavior among all individuals involved in an emergency or whether it is only exhibited by a portion of evacuees (i.e. the problem of preference/perception heterogeneity). Is it, for example, reasonable to classify evacuees as those who do and those who do not follow others, and even if so, how is the population split on that account?

The relation between social influence and decision making in general has been vastly studied and discussed in the social sciences. The existing literature indicates a substantial level of context-specificity attached to the problem. For example, based on a series of field-type experiments, Faria et al. [63] found that, similar to groups of animals [64], when reaching a target, individuals in a group are capable of identifying informed individual(s) without the aid of obvious signaling and of following them to enhance the group performance. Gallup et al. [65] showed how the visual attention of pedestrians in a crowd can, in limited ways, be diverted by a few individuals gazing up at a certain point. Also, in a series of computer-aided human experiments, Zhao et al. [66] showed that when humans compete for limited resources in a complex adaptive system, the effect emerging from the formation of herding can vary from beneficial to detrimental depending on the biasedness of the resource allocations. Despite the aforementioned evidence, there have been limited and scattered investigations of this phenomenon in relation to the navigational decision-making of human crowds during evacuations. Useful empirical evidence, however, is provided by the work of Bode and Codling [38] and similarly by Bode et al. [31], who suggested that subjects familiarized with the egress environment do not develop a strong tendency to follow others when they can see the entire egress environment, even under stress-induced treatments in the lab. In that spirit, our model estimates are to a great extent consistent with their findings.
However, what is further suggested by our experiments is that, firstly, exit choices are not likely to be solely determined by a single factor (such as herding) and rather, there proves to be a combination of factors among which tradeoffs are made (of which following/avoiding others is only one single element). Secondly, it is manifested in a quantitative fashion by our model estimates that a significant dependency exists between individuals' tendency to follow other evacuees' decisions and the decision maker's knowledge (or awareness) of exit attributes (such as the level of congestion around the exits). Results from both experiment types clearly suggest that attribute unawareness increases the likelihood of herding behavior to be displayed. In other words, attribute unawareness (introduced to our trials in the form of exit invisibility) can entirely change the direction at which observing the decisions of moving flows of evacuees impact on individuals' decisions from a negative utility element. As a result of the attribute unawareness (or attribute ambiguity), the impact of the majority's choice can alter from being perceived as potential congestion (thus, increasing the likelihood of avoiding others) to extra sources of information (thus, increasing the likelihood of conforming to the crowd). Perception of congestion relative to distance Referring to Fig 5, one major difference between the HC and RC estimates is the relativelysmaller (absolute) magnitude of the estimate for the coefficient of congestion (CONG) in the RC models compared to those of the HC models. The difference is even clearer when comparing the positions at which the estimates of the coefficient of CONG variable stand in Fig 5 relative to the position of the coefficient for distance (DIST) variable. At a possible explanation, we attribute this to a possible systematic bias in terms of the perception of distance (relative to the perception of congestion) in the HC experiments. When making the exit decisions in the HC experiments, subjects were likelier to underestimate the impact of distances (relative to congestion) as they did not need to actually traverse those distances, whereas an actual effort would become necessary when the decision is made in the field. In other words, it appears that our HC experiment participants have placed less emphasis on minimizing the distance (relative to minimizing the congestion) and have instead found the congestion impact more salient on their decisions than the decisions of the participants in the field-type trials. Decision randomness and utility scales Referring to the 95% confidence intervals presented in Fig 5, another clear difference between the estimates obtained from the HC observations compared to those derived from the RC observations is that the HC estimates appear to be generally more inflated in terms of their absolute magnitudes. This could be attributed to the difference in the scale of the utilities as a relevant factor that should be taken into consideration when comparing estimates of the two choice models derived from two different datasets. The scale at which the coefficients are estimated (relative to the scale of the random error components that is typically normalized to a fixed value [61,62]) is a proxy and a measure of the randomness extent of the decisions. The smaller the estimates' scale, the stronger the role played by the error components and as a result, the more random the choices [67]. 
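The behavioral meaning of the scale parameter can be seen in a small numerical example (ours, with made-up utilities): shrinking the scale flattens the logit probabilities toward a uniform spread over the exits (more random-looking choices), while inflating it concentrates the probability on the highest-utility exit.

```python
import numpy as np

def logit_probs(V, theta=1.0):
    """Multinomial logit probabilities for systematic utilities V at scale theta."""
    eV = np.exp(theta * np.asarray(V))
    return eV / eV.sum()

V = [-1.0, -1.5, -2.3]          # made-up systematic utilities of three exits
for theta in (0.5, 1.0, 2.0):
    print(theta, np.round(logit_probs(V, theta), 3))
# theta = 0.5 -> [0.435 0.338 0.227]   (closer to uniform: noisier choices)
# theta = 1.0 -> [0.532 0.323 0.145]
# theta = 2.0 -> [0.693 0.255 0.052]   (more deterministic choices)
```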
From a statistical estimation point of view, the utility scale is not statistically identifiable (separately from the coefficients) based on a single dataset. However, utility scales associated with different data segments (when combining various sources of choice observations) can be estimated on a relative basis. We quantified this effect by adopting the joint estimation method proposed by Fiebig et al. [52] known as the "generalized multinomial logit" (GMNL) that can distinguish between the estimated scale associated with different data segments when different data sources are pooled. We basically normalized the utility scale associated with the RC observations to unity (or equally, we normalized the utility scale dummy variable, δ, associated with the RC segment to 0) and estimated that of the HC segment relative to that. Results (reflected in the statistical significance of the estimated coefficient for "utility-scale dummy variable" in Table 1) confirm that the RC observations are associated with smaller utility scales at a statistically significant level compared to the HC observations. This indicates that decisions in our field-type experiments have been made more randomly than those made in the hypothetical scenarios. In other words, the hypothetical setting seems to have underestimated how randomly people make their navigational escape choices. This clearly could be attributable to a stronger attribute perception error as well as less "time to think" [68,69] when people make their choices in a real crowd compared to when they are provided with a topdown view of the environment and more time to decide in a hypothetical setting. Preference heterogeneity The presence of the coefficient heterogeneity can be detected by the RP-MNL model estimates through inspection of the statistical significance of the estimated standard deviations. In this spirit, the model estimated based on the HC data suggests the presence of significant heterogeneity attached to all utility variables, whereas the model from the field excludes distance and visibility from this list and suggests decision makers as being statistically homogenous in terms of the relative weights they associate to these two utility factors. In this sense, one can say that the effect of heterogeneity has been to an extent overplayed by the HC observations. Model predictions We quantitatively investigate the overall impact of all the differences that we highlighted in previous lines in terms of the estimates obtained from our real and hypothetical choice experiments on predictions of the models. Here, the question of interest is the extent to which predictions of the models derived from these two different forms of observations compare to one another. In other words, there proves to be certain similarities as well as dissimilarities between HC and RC-based models and it is still not clear to what degrees this makes predictions of the two models differ from one another. To address this question, we excluded from the RC observations the element of the dependent variables (i.e. the choice) and set the resultant "choice situations" as the basis of comparisons (without making any use of the choices made by subjects in those observations). The probabilities predicted for each alternative exit in each observed choice situation by each of the four models presented in Table 1 are computed. The paired predictions are then compared and visualized through the scatter plots depicted in Fig 6(A) and 6(B). 
These figures visualize points whose coordinates are the probabilities predicted by the two different models represented by their axes (for example, an RC-based model and its counterpart HC-based model), with the y = x line superimposed on the scatterplots to facilitate the comparison. The scatter of the points around the bisector line is a visual measure of how closely the two models produce their probabilities. According to these figures, there appears to be a substantial level of agreement between the predictions of the RC- and HC-based models. We quantify this effect by computing the correlations and the average differences between each pair of prediction series; the correlations and average differences shown on each graph clearly suggest that incorporating preference heterogeneity into our models makes a negligible difference to their prediction performance (an average difference of 2.7% and a correlation of .99 between the predictions of the models derived from the RC observations, for example).

Conclusions and Discussion

Discrete-choice models were estimated based on the exit-choice observations collected in two different experimental settings: choices made in response to hypothetical evacuation scenarios and choices made in mock evacuation trials that imitated the hypothetical setting in a more realistic fashion (using an independent sample of participants). The aim of the study was twofold. It primarily intended to investigate the external validity of results obtained from hypothetical decision-making experiments in the context of emergency wayfinding. In addition, prediction models of evacuees' exit choices as well as important behavioral insights were gained from the experimental observations. Similarities and dissimilarities of the econometric estimates inferred from these two forms of choice observations were investigated in terms of the sign, statistical significance, magnitude, person-to-person variation, and behavioral interpretation of the estimated parameters, in addition to the utility scales (which reflect the context-dependent randomness of the decisions). Predictions of the models derived from the hypothetical and realistic experiments were also contrasted. The model estimation results showed perfect consistency between the sign and statistical significance of the parameters obtained from the hypothetical and realistic experiments, and fairly similar estimate patterns were observed when the estimates of the two models were visualized against each other. However, the analyses also highlighted certain discrepancies. The modeling results suggested that spatial distances were underestimated in the hypothetical setting. It also became apparent that people made their decisions in the realistic setting more randomly than when they simply stated their responses to the hypothetical scenarios. Moreover, in the realistic setting, participants appeared to be less heterogeneous in terms of the relative weight they associate with the various factors contributing to their choice of exit, whereas their hypothetical responses had indicated a more pronounced person-to-person variation (although further analyses revealed that inclusion or exclusion of the coefficient heterogeneity effect plays a negligible role in our model predictions).
Nonetheless, comparative analyses highlighted that the overall effect of these discrepancies did not make a drastic difference in the predictions obtained from the models estimated on real and hypothetical choices. Models derived from these two observation types made fairly similar predictions and the type of data used for modelling did not on average make a dramatic difference in the prediction phase (the differences in the produced probabilities averaged nearly 10%). In other words, we could have made by-and-large same predictions using the hypothetical choice observations as we did based on the real choice observations. The estimated models also shed interesting insights into the behavior of human crowds during escapes from confined multi-exit environments. Models suggested that the decisions of evacuees are made based on a tradeoff between a set of variables and single-variable-based decision criteria do not adequately explain their escape decisions. People make tradeoffs between minimizing spatial distances, choosing less congested areas and choosing visible exits, and are also influenced by observing the decisions of others. However, they evaluate the decisions of others differently depending on the presence or absence of ambiguity in the escape environment. We found out that moving flows of crowd headed towards a target that is visible by the decision maker are perceived negatively (i.e. potential congestions causing further delays) whereas similar flows are perceived as positive utility elements (in terms of the central tendency) when attributes of their target is ambiguous to the decision maker (as a result of the exit being invisible). In other words, the introduction of ambiguity proved to make a meaningful difference in the way that the social influence is perceived by decision makers. In more specific terms, flows of fleeing evacuees are not likely to trigger a herding effect when the decision maker is aware of the presence and the attributes of their targets, whereas attribute unawareness increases the likelihood of herding in making navigational decisions. The models resulted from this study are fully generic in variables (i.e. we avoided any variable or major modeling element specific to our experimental setup). Accordingly, they can be readily introduced to a wide range of computer programs that simulate crowd evacuations. Among the models investigated in this work, application of the model estimated on the combined data might be preferred as it benefits from a larger sample of observations. In such case, application of the model at the scale associated with the real choice segment of the data would be recommended as it corresponds to a more realistic experimental setting. The resultant simulation program can be used to evaluate the performance and efficiency of evacuation scenarios in built environments. However, there are certain limitations that need to be taken into consideration for potential prediction purposes. First of all, the validity of our models needs to be further scrutinized in larger venues than what we used for our experimentations. Moreover, the role of context-specific conditions under which our observations (and as a result, our models) were derived should also be taken into account for prediction purposes. That mainly includes occupants' familiarity with the escape environment, the extent of the time pressure and urgency and the general visibility condition in the egress environment. 
It is implicitly assumed in our models that evacuees are at least partly familiar with the geometry of the escape environment. Acknowledging the well-established role of memory in navigational decision-making [70][71][72] we expect some behavioral differences when evacuation of human crowds who are unfamiliar with their surrounding environment is concerned. Also, although we took certain measures to make the field experiments as realistic as possible, the role of the time pressure and urgency level that might cause high levels of fear and stress in extreme emergency scenarios had to be downplayed in this work. We assume, it might be the case that as a result of an extreme level of urgency (e.g. acute fear state), people may attend to a smaller set of factors than what our models suggested and/or assign different relative priorities to those factors. Similarly, it could be hypothesized that when the general visibility condition is reduced in the escape environment, as a result of smoke for example, certain factors may play a more pronounced role in modulation of the navigational decisions and as a result, different behavior might emerge. In particular, considering our finding as to the strong association between the perception of the social influence and the ambiguity level in making navigational decisions, we assume that visually ambiguous evacuation situations could be a context where tendency to exhibit herding behavior can be stronger than what we observed in our experiments (in which the general vision was assumed intact). Investigating how changes in these context-specific factors may impact on the escape behavior can open many more research questions to be addressed by future studies. Yet at this stage, acknowledging potential consequences of misguided modeling in this safety-related topic, we should recommend cautions about extrapolating the models and findings of this study to the contexts to which they do not belong. Also, the fact that we found similarities between the findings we earned from the hypothetical and realistic decision settings (more than the level we expected) does not necessarily indicate possible generalizations to all decision contexts, nor is it intended to downplay the importance of real-world data and field experimentation. Our findings could appear promising as to the potential relevance of the hypothetical choice observations when real or field data is prohibitively inaccessible (at least for the decision contexts of no financial consequence). Yet, the finding of this single study would hardly provide conclusive evidence for making robust generalizations in terms of the reliable applicability of this experimentation method to all contexts of decision-making. We see data collected in hypothetical and realistic settings as complements and our work was even intended to encourage collection of more field-type observations rather undermining their importance. We believe this may be the only way through which one can make solid conclusions as to the limits of the applicability and generalizability of the lab experiments. We believe that more comparative evidence of this type may help researchers to map out boundary conditions and the type of problems that fall beyond the range of the generalizability of the hypothetical decision experiments. In a recent meta-analysis study for example, Herbst and Mas [73] demonstrated great reliability of laboratory-generated findings in the context of "productivity spillover" at quantitative levels. 
The accumulation of such evidence might at least provide researchers with convincing grounds to rely on the idea of "some number better than no number" [49] in situations and contexts where no real observations are available to the analyst. From a different perspective, it might be of interest to cognitive neuroscientists to provide more evidence on how differently real and hypothetical decisions are processed and encoded in the brain, in order to explore the underlying cognitive differences of decision making in real versus hypothetical settings. To our knowledge, studies of this type have started to emerge [74], which could provide a new frontier and perspective for advancing the state of knowledge on this problem.

Acknowledgments

The authors would like to extend their sincere thanks to Prof. Armin Seyfried for his insightful advice on the design and administration of the evacuation trials. The contribution and effort of the undergraduate and postgraduate students (at Monash University and The University of Melbourne) who assisted the authors in extracting choice data from the RC trials raw data is also acknowledged. The constructive comments of two anonymous referees who reviewed this work are also greatly appreciated.

Author Contributions

Conceptualization: MH MS.
Validation of the most cost-effective nudge to promote workers’ regular self-weighing: a cluster randomized controlled trial Regular self-weighing is useful in obesity prevention. The impact of nudge-based occupational self-weighing programs in the cluster randomized controlled trial was examined. The primary outcome was regular self-weighing after 6 months, which we used to compute cost-effectiveness. Participants were Japanese local government employees who underwent 1 h workshops after being assigned to one of the three nudge groups. Each group was designed according to the nudges’ Easy, Attractive, Social, Timely framework: quiz group (n = 26, attractive-type nudges), implementation intentions group (n = 25, social-type nudges), and growth mindset group (n = 25, timely type nudges). A reference group (n = 36, no nudges) was also formed. After 6 months, all three interventions were effective for regular self-weighing, with the growth mindset intervention (60.0%) being significantly more effective. The cost-effectiveness of the growth mindset group was 1.7 times and 1.3 times higher than that of the quiz group and the implementation intentions group, respectively. Findings from our study are expected to facilitate the use of nudges for health practitioners and employers, which in turn may promote obesity prevention. Obesity is an important public health issue. The World Health Organization claims that obesity is largely preventable 1 . In Japan, obesity is defined 2 when an individual has a body mass index (BMI) ≥ 25 kg/m 2 , and its prevalence has been increasing among the working generation lately 3 . In 2013, the percentage of men in their 20 s-60 s with a BMI ≥ 25 kg/m 2 was 29.0%, and in 2019, it increased to 35.1% 3 . This is higher than the target percentage of 28% in 2022, as stated in the Japanese government's health promotion policy, "Health Japan 21 (second term)" 3 , which emphasizes the aspect of prevention. For example, one of the indicators is the "Increase in percentage of individuals maintaining ideal body weight" as, to promote obesity prevention, it is essential to control weight, even for people without obesity. Although many weight-loss programs have been developed for obese workers 4,5 , the prevalence of obesity continues to increase. Consequently, obesity prevention programs that include non-obese workers who may not have a strong motivation for obesity prevention are required. Self-weighing is a simple self-monitoring action that can help in obesity prevention. The National Institutes of Health in the U.S. recommends self-monitoring as a crucial component of long-term weight maintenance 6 . Previous studies have shown that frequent self-weighing may be beneficial for weight control and the prevention of weight gain 7 . Daily or weekly self-weighing (hereafter referred to as regular self-weighing) is reported to be correlated with successful weight loss and maintenance 8 . Self-weighing requires very little time and may potentially match the needs of busy workers. However, only 52.4% of the Japanese working generation has been reported to perform regular self-weighing 9 . Regular self-weighing is a typical intertemporal choice (although the costs occur right now, and the effects appear in the future). Nudges can be useful in making rational intertemporal choices. A nudge is "any aspect of the choice architecture that alters people's behavior in a predictable way without forbidding any options or significantly changing their economic incentives" 10 . 
One of the nudges' frameworks is the EAST, which comprises the following elements: "Easy" (e.g., simplifying messages), "Attractive" (e.g., attracting attention), "Social" (e.g., www.nature.com/scientificreports/ making a commitment), and "Timely" (e.g., prompting at the best timing) 11 . The Japanese government encourages the use of nudges in the workplace for health promotion 12 , but some companies still do not incorporate them into obesity prevention. One of the obstacles to the widespread adoption of nudges could be the difficulty in choosing the highest-priority EAST framework items. Moreover, cramming four nudge elements into one intervention can cause an information overload, which does not match the "Easy" element. Nudges are costeffective methods 13 , and showing which nudge is the most cost-effective will facilitate health practitioners' and employers' decision-making. This study compared three types of nudge-based programs, the attractive-, social-, and timely type nudges, based on the hypothesis that different nudges will cause changes in self-weighing behavior. We calculated the cost-effectiveness of each intervention according to our previous study 14 . This study aimed to validate the most cost-effective nudge to promote workers' regular self-weighing after 6 months. Methods Research design. A cluster randomized controlled trial was conducted with an intention to treat among three nudge groups. A reference group was additionally created to compare the effects between nudges and information provision. Setting and participants. Participants were employees at the Aomori Prefectural Government Office located in the Shimokita region who had applied for a self-weighing promotion workshop hosted by the public health center. The program was initiated in September 2017, and a final survey was conducted in March 2018. The total duration of this study (6 months) was decided by referring to behavioral maintenance spans, which were defined in the transtheoretical model 15 . Recruitment was conducted 2 weeks before the first workshop using bulletins. Employees who volunteered were enrolled regardless of sex, age, or BMI, provided that they met the criterion of self-weighing less than once per week. Allocation. The employees were working in a section, each of which was positionally remote from another section in their organization. Thus, clusters of participants were formed as section units. Each cluster was numbered and randomly assigned to one of the three nudge groups (quiz, implementation intentions, and growth mindset) using a random number generator. Clustering and allocation were performed by the first author in accordance with the CONSORT 2010 statement 16 . Although the clustering and allocation procedures were not revealed to the participants until the completion of allocation, they were informed that the study was an RCT. Intervention. The programs were designed according to the nudge framework of EAST. All interventions included easy-type nudges. Each nudge group had an assigned type of nudge as its form of intervention. The quiz group mainly used attractive-type nudges, the implementation intentions group used social-type nudges, and the growth mindset group used timely-type nudges. (1) Overview of the workshop 1-hour assembly-type workshops were held at the local government meeting room on separate days in September 2017. 
Each workshop comprised a general session, an educational session, and a group work session, whose outline is described in the process evaluation paper 14 . In the general session, the moderators, who are health center staff, explained the purpose and ethics of the workshop. They also conducted an educational session on the necessity of obesity prevention, effects of weight recording, and importance of a healthy diet and physical activity. After that, a group work session was held wherein the different interventions were applied. However, for the growth mindset group, the order was changed; first, the general session, second, a group work session, and third, the educational session. (2) Group work sessions (30 min) Group work sessions were conducted in teams of 4-5 participants randomly selected from the roster of each nudge group. (a) Quiz group [attractive-type nudge] A quiz competition was organized among the teams at the workshop, wherein the team members worked together to answer questions on the costs of obesity (e.g., Question: "In 40 year-old women, how much higher are the medical costs for obese people than for people of appropriate weight"? Answer: "approximately 20%" 17 ). Each team discussed their answers for a minute and presented them in turn. The moderator then announced the answer and score. Each team competed for the highest total score from four questions. In this group, the following nudges were designed. (1) An appeal to loss aversion 18 (2) Addition of game elements to reduce stress induction (b) Implementation intentions group [social-type nudge] The participants were instructed to declare to their teams the time and place of self-weighing and how they would reward themselves after self-weighing continuously for 1 month. They were also asked to listen attentively to the other member's declarations, nod, and make sympathetic comments. After that, two moderators demonstrated it as follows: one declared, "I will weigh myself after drying my hair in the www.nature.com/scientificreports/ washroom. If I continue self-weighing, then I will open an expensive bottle of champagne". The other said, "Fantastic! I believe you can do it. " In this group, the following nudges were designed such that: (1) Declaration of implementation intentions 19 leads to commitment, and the positive feedback from others helps to reinforce the commitment. A systematic review reported the positive effect of commitment on weight loss 20 . (2) By listening to the declarations of others, a peer effect will be fostered. (c) Growth mindset group [timely type nudge] Participants were instructed to present their experience of achieving success after making an effort. Any kind of experience could be presented, even those that had nothing to do with self-weighing. The moderators explained the instructions as with the implementation intentions group. After that, two moderators demonstrated it as follows: one said, "When I was a high school student, I was short. I practiced a lot and won the regular player in my club. " The other said, "It must be a fantastic experience. " The program was designed with the nudges: (1) Reminding individuals of their achievements and receiving positive feedback could aid in creating a growth mindset among participants (such as "No matter the kind of a person I am, I can always change substantially") 21 . 
(2) Changing the order of the session such that the priming effect could work; the participants would accept the necessity of self-weighing positively when they have a growth mindset. Survey of the three nudge groups. The following items were chosen to create a survey in the form of a self-administered questionnaire: (1) Basic characteristics Participant's sex, age, weight, height, BMI, and smoking habits were obtained. (2) Outcomes The primary outcome was the number of subjects who had self-weighing habit after 6 months, which was divided by the estimated cost of each program for the cost-effectiveness analysis. The secondary outcomes were changes in behavioral stage or mindset and weight maintenance. In addition, the following presence factors were determined as they were implicated to have a positive effect on weight management: support from others, recording of regular weight measurements, and use of scales distributed at the workplace [22][23][24] . The questionnaires for the three groups were managed with a linked-anonymized list. The surveys were conducted at four time intervals: immediately before the workshop (T0), immediately after the workshop (T1), and 6 months after the workshop (T3). In the T3 survey, the self-weighing habits 1 month after the workshop (T2) were also asked. The T0 and T1 survey questionnaires were distributed and collected onsite by the moderators. The T2 and T3 survey were distributed by each organization. Participants sealed their questionnaires in envelopes and placed them inside a collection bag; each department submitted their collection bags to the moderators. Reference group. Because of the nature of the workshop, it was difficult to assign applicants to a control group. Therefore, employees from randomly selected organizations of the prefectural government were assigned to the reference group (eligibility criteria: self-weighing frequency was less than once per week). In September 2017, the members of the reference group received a bulletin that contained the same information presented in the educational session conducted in the workshop. The survey questionnaire was administered to the reference group in March 2018 after reminding the participants of their self-weighing habits and weights in September 2017. They were then asked to report their current self-weighing habits and weights. Ethical considerations. The management of all the organizations participating in the study agreed to their employees' participation. The applicants provided written informed consent after receiving a written explanation about the free nature of participation and confidentiality terms. All questionnaires were implemented in an anonymous format. For the three nudge groups, participants received written and verbal notification at the workshop that their responses to the survey would be interpreted as consent. This study was approved by the research ethics committee of the Aomori University of Health and Welfare (Approval no. 1720) and was registered with the UMIN Clinical Trials Registry (UMIN000028143: 08/07/2017). All methods were performed in accordance with the Declaration of Helsinki and adhered to Good Clinical Practice guidelines. All members were given the opportunity to attend a regular weight measurement seminar in 2019, based on this study's results. Distribution of weighing scales to the offices. In the preliminary survey, we found that some employees did not own weighing scales. 
Therefore, scales were distributed to all the offices, including those in the reference group, to create an environment that facilitates self-weighing.

Statistical analyses. Assuming a power of 80%, an α error of 5%, and an effect size of 30%, a sample size of 108 participants (36 in each group) was computed based on the primary outcome. Missing values were excluded from the analysis.

Results

In the three nudge groups, 84 employees from a total of 246 applied to the self-weighing promotion workshop (Fig. 1). From the preliminary survey, approximately 127 employees did not weigh themselves regularly. In the reference group, the survey was returned by 38 employees. Of these, two did not meet the criteria; thus, 36 were included in the analysis. No adverse events were observed throughout this study.

Comparison among outcomes. Table 2 displays the comparative outcomes. At T3, regular self-weighing, the primary outcome, was observed in 26.9%, 32.0%, and 60.0% of the participants in the quiz, implementation intentions, and growth mindset groups, respectively, while it was observed in 2.8% of those in the reference group (p < 0.001). The growth mindset group showed a significantly higher self-weighing habit than the other intervention groups. There were some significant differences in the secondary outcomes. At T3, 49.3% of individuals in the quiz, implementation intentions, and growth mindset groups maintained their weight (48.0%, 41.7%, and 58.3%, respectively), compared with 11.0% in the reference group (p < 0.001). The quiz group showed a significantly lower number of individuals acquiring support from others. In a previous study, the total implementation costs, including labor costs, were $2009, $1755, and $2518 for the quiz group, implementation intentions group, and growth mindset group, respectively 14 ; therefore, the cost per person for regular self-weighing at 6 months was $287 (= $2009/7), $219 (= $1755/8), and $168 (= $2518/15), respectively. This means that the cost-effectiveness of the growth mindset intervention was 1.7 times and 1.3 times higher than that of the quiz and implementation intentions interventions, respectively.

Discussion

At T3, the three nudge groups had significantly higher rates of self-weighing than the reference group, which indicates that nudges were effective. Further, we considered which nudge was the most cost-effective among the three groups. In this population, the most cost-effective program appears to have been the growth mindset program. One variable potentially explaining this positive result might be the support received by the workers from others; eight people received support, and seven continued self-weighing. Several studies have shown that support might influence weight management 22,25 . However, this study could not clarify why the implementation intentions group was less effective despite receiving more support from others. The growth mindset group's intervention might prompt participants to think: "I can do self-weighing if I get support from others." The effects could be explained by the synergy of priming and a growth mindset. Priming may change people's subsequent behavior if they are first exposed to certain information 26 . People primed for stereotypes tend to behave as stereotyped 27 . In the initial group work, the priming was designed to break stereotypes such that the participants developed an enhanced growth mindset.
This could be a good opportunity to make the participants more receptive to the educational session and to boost self-weighing. The order of sessions is important; the positive priming might not have worked if the educational session had been held before the group work. This implies that interventions should be designed from the viewpoint of optimal timing. The quiz group had the highest cost per person for regular weight measurement; hence, this might not be the most viable program. Moreover, the quiz group alone had no support from others. The quiz program was not interactive, in contrast with the other two intervention programs, which involved sharing positive comments after declarations or presentations by the participants. Interactive programs may be important in obtaining support from others. The implementation intentions intervention was reported to be cost-effective in occupational health activity promotion 13 but had less impact than the growth mindset program in this setting. There are some limitations to this study. First, the analyzed set of nudge groups was smaller than the estimated sample size. Therefore, the hypothesized differences between the groups might not have been detected. Second, assessments were based on the self-reported recollection of participants who weighed themselves using household scales; thus, the risk of response/recall bias might not be low. To minimize this bias, participants' weights should be recorded objectively and uniformly in the study, and their data must be documented. However, the recording could be a bottleneck for many participants and was therefore not implemented. Third, the effect of seasonal bias may have impacted weight maintenance among the participants. For example, seasonal variations of metabolic syndrome prevalence have been reported in Japan 28 . To overcome this limitation, a 1-year follow-up study would be required. Fourth, we did not enquire whether the participants had any obesity-related diseases (e.g., hypertension, dyslipidemia, and diabetes). Even if some participants might have had such diseases, we assumed that since they were randomly assigned to each group, there would be no major problems. As the employees' disease status information is generally securely administered in the human resource section, it may not be appropriate to obtain such information at a workshop held by an external body. In fact, many workers hesitated to provide information on the matter in the preliminary survey.
Fifth, the participants who applied for the workshop might have had a strong motivation for self-weighing; therefore, because of this selection bias, the participants are probably not representative of the general working population in Japan. Sixth, we could not create a control group. The reference group was provided information on the importance of self-weighing, and we could not estimate the effect without any intervention. Further studies to overcome these limitations are warranted. This study has two strengths. First, it involved a comparison with the reference group. Many studies of nudges in public health lifestyle interventions do not have a control group 29 . Using a reference group can be useful when creating a control group is difficult. Owing to the reference group in this study, we found that nudge-based workshops were more effective than printed materials. Second, the results were analyzed from the viewpoint of cost-effectiveness. This will help practitioners and employers who need simple and low-cost methods to enable healthy choices among their employees 30 . This study's target behavior and intervention were simple: "self-weighing" and a "1 h workshop," respectively. Demonstrating the cost-effectiveness of simple behaviors and interventions is expected to facilitate the use of nudges by health practitioners and employers, which in turn may help achieve the obesity prevention goals of the Japanese government 3 . Our findings will provide the basis for further studies regarding obesity prevention.

Data availability

The datasets generated and/or analyzed during the current study are available here: https://1drv.ms/x/s!AvBsmvvF_CGNgw93p-y5JwIgdsFL?e=rM9eQn

[Table 1. Baseline characteristics of participants [mean ± SD, n (%)]. Missing values were excluded from the analysis. Analysis of variance was used for age, weight, and BMI; the Kruskal-Wallis test was used for the categorical data of appropriate weight distribution, stage of behavioral change, and frequency of regular weight measurement; the χ2 test or Fisher's exact test (items with fewer than five) was used for the other data. a Current habitual cigarette smokers. b Those not interested in regular self-weighing were included in the precontemplation stage; those who thought they might try but not right now, in the contemplation stage; and those who wanted to implement regular self-weighing immediately, in the preparation stage. c Those who answered "Somewhat agree" or "Agree" to the question, "What do you think about the opinion that 'you can change your lifestyle even now if you want to'?"]

[Table 2 notes. a Indicates a significant difference in the multiple group test (1.96 > p or −1.96 < p in the residual analysis for the χ2 test, and p < .05/6 = .008 in the Bonferroni method test for Fisher's exact test). b Those who regularly weigh themselves at least once a week. c Those who progressed through the stages of change (including those who started regular self-weighing) compared with T0. d The reference group was significantly low. e The growth mindset group was significantly high among the three groups. f Those who did not gain weight for 6 months from T0. g Those who answered "Yes" or "Yes, a little" to the question, "During the past 6 months, did you get support from the people around you regarding self-weighing?". h The implementation intentions group was significantly higher than the quiz group. i Those who answered "Almost continuously" or "Sometimes" to the question, "During the past 6 months, did you keep a record of your regular weight measurements?"]
Probability elicitation for Bayesian networks to distinguish between intentional attacks and accidental technical failures . Introduction Modern societies rely on proper functioning of Critical Infrastructures (CIs) in different sectors such as energy, transportation, and water management which is vital for economic growth and societal wellbeing.Over the years, CIs have become over-dependent on Industrial Control Systems (ICSs) to ensure efficient operations, which are responsible for monitoring and steering industrial processes as, among others, electric power generation, automotive production, and flood control.ICSs were originally designed for isolated environments [1].Such systems were mainly susceptible to technical failures.The blackout in the Canadian province of Ontario and the North-eastern and Mid-western United States is a typical example of a technical failure in which the absence of alarm due to software bug in the alarm system left operators unaware of the need to redistribute power [2].However, modern ICSs no longer operate in isolation, but use other networks to facilitate and improve business processes [3].This increased connectivity, however, makes ICSs more vulnerable to cyber-attacks apart from technical failures.A cyber-attack on a German steel mill is a typical example in which adversaries made use of corporate network to enter into the ICS network [4].As an initial step, the adversaries used both the targeted email and social engineering techniques to acquire credentials for the corporate network.Once they acquired credentials for the corporate network, they worked their way into the plant's control system network and caused damage to the blast furnace. It is essential to distinguish between attacks and technical failures that would lead to abnormal behaviour in the components of ICSs and take suitable measures.In most cases, the initiation of response strategy presumably aimed at technical failures would be ineffective in the event of a targeted attack and may lead to further complications.For instance, replacing a sensor that is sending incorrect measurement data with a new sensor would be a suitable response strategy to technical failure of a sensor.However, this may not be an appropriate response strategy to an attack on the sensor as it would not block the corresponding attack vector.Furthermore, the initiation of inappropriate response strategies would delay the recovery of the system from adversaries and might lead to harmful consequences.Noticeably, there is a lack of decision support to distinguish between attacks and technical failures. 
Bayesian Networks (BNs) have the capacity to tackle this challenge especially based on their real-world applications in medical diagnosis [5] and fault diagnosis [6][7][8].In addition, there are other BN-based applications in domains like resilience engineering [9], structural systems [10].BNs belong to the family of probabilistic graphical models [11], consisting of a qualitative and a quantitative part [12].The qualitative part is a Directed Acyclic Graph (DAG) of nodes and edges.Each node represents a random variable, while the edges between the nodes represent the conditional dependencies among the random variables.BN structure modelling includes determining nodes and relationships between the determined nodes [13].The quantitative part takes the form of a priori marginal and conditional probabilities so as to quantify the dependencies between connected nodes.BN parameter modelling includes specifying prior marginal and conditional probabilities [13]. In order to address the above-mentioned research gap, we developed a framework in our previous work to help construct BN models for distinguishing attacks and technical failures [14].Furthermore, we extended and combined fishbone diagrams within our framework for knowledge elicitation to construct the qualitative part of such BN models.However, our previous work lacks a systematic method for knowledge elicitation to construct the quantitative part of such BN models.This present study aims to provide a holistic framework to help construct BN models for distinguishing attacks and technical failures by addressing the research question: "How could we elicit expert knowledge to effectively construct Conditional Probability Tables of Bayesian Network models for distinguishing attacks and technical failures?".The research objectives are: • RO1.To propose an approach that would help to effectively construct Conditional Probability Tables (CPTs) for our application.• RO2.To demonstrate the proposed approach using an example in the water management domain. Empirical data is one of the data sources utilised to populate CPTs of BN models in cyber security [15].This empirical data can be extracted from specific sources like cyber security incidents database, technical failure reports, and red team vs. blue team exercises.However, in the water management domain in the Netherlands, there is no/limited cyber-attacks on their infrastructures [16].In addition, information corresponding to limited cyber-attacks and technical failure reports are not shareable due to the sensitivity of data [16].Furthermore, red team vs. 
blue team exercises were not possible due to practicalities, especially there is a lack of testbeds which could facilitate such exercises in the Netherlands [16].Expert knowledge is one of the predominant data sources utilised to populate conditional probability tables (CPTs) especially in domains where there is a limited availability of data like cyber security [15].Probability elicitation is the most challenging part of constructing BN models especially when it relies on expert knowledge as we need to elicit probability for every possible combination of parent variables state to complete the CPT of a child variable from experts.The CPT size of a child variable grows exponentially with the number of parents.For instance, the CPT size of a binary child with 5 binary parents is 64 (2 5+1 ) entries.The burden of probability elicitation could be reduced by: (i) reducing the number of conditional probabilities to elicit by imposing structural assumptions, and (ii) facilitating individual probability entry by providing visual aids to help experts answer elicitation questions in terms of probabilities [17].We evaluate several techniques for reducing the number of probabilities to elicit, and conclude that DeMorgan models is most suitable for our purpose [18].Furthermore, we review several methods for facilitating individual probability entry and conclude that probability scales with numerical and verbal anchors is most appropriate for our application [19,20]. The main contributions of this paper are as follows: (i) we propose an approach involving DeMorgan model and probability scales with numerical and verbal anchors that could help to effectively construct quantitative part (CPTs) of BN models for distinguishing attacks and technical failures.(ii) we demonstrate the proposed approach using an example in the water management domain to mainly show which parameters need to be elicited from experts and corresponding questions that needs to be asked in addition to how the rest of the probabilities in the CPTs are computed.This paper is not about "anomaly detection" (i.e., detecting whether an anomaly has occurred or not), but rather "diagnosis" (i.e., pinpointing whether the detected anomaly is due to cyber-attack or technical failure).Diagnosis is prevalent in medical and safety domains [21,22].Furthermore, we utilised Design Science Research (DSR) method to tackle our RQ, which is widely used to create artefacts [23].An artefact is defined as an object made by humans for the purpose of solving practical problems like distinguishing attacks and technical failures [24].An artefact could be a construct (or concept), a model, a method, or an instantiation [25].The practical problems can be solved using artefacts in numerous cases.There are five main phases in the DSR process: (i) problem identification, (ii) design and development, (iii) demonstration, (iv) evaluation, and (v) communication.In the problem identification phase, we gather constraints and high-level requirements using semi-structured interviews and focus group sessions with experts in safety and/or security of ICS in the water management domain in the Netherlands.The list of questions which we asked the experts in addition to constraints and requirements are provided in the Appendix.These constraints and high-level requirements are mainly for developing our holistic framework which would then help to construct BN models for distinguishing attacks and technical failures and their evaluation.This phase results in a set of high-level 
requirements and constraints based on the responses from experts, which are mainly used as an input for the "design and development" and "evaluation" phases of the DSR process.This paper corresponds to the "design and development" and "demonstration" phases in the DSR process.However, evaluating the performance of the proposed approach is outside the scope of this study, which corresponds to the "evaluation" phase in the DSR process.Our related work that has already been published corresponds to the "evaluation" phase in the DSR process [16].The set of constraints and high-level requirements mentioned in the Appendix plays an important role in structuring the problem space and deriving design decisions systematically.This is used as a basis for the "design and development" and "evaluation" phase of the DSR process. The remainder of this paper is structured as follows.In Section 2, we illustrate the different layers and the components of an ICS and describe a case study in the water management domain that is used to demonstrate our proposed approach.In Section 3, we describe our existing framework in addition to a systematic method for knowledge elicitation to construct the qualitative part of BN models for distinguishing attacks and technical failures.Section 4 provides an essential foundation of techniques to reduce the burden of probability elicitation to construct the quantitative part of BN models for distinguishing attacks and technical failures.In Section 5, we demonstrate the proposed approach using an example in the water management domain.Section 6 presents discussion followed by the conclusions and future work directions in Section 7. Industrial control systems In this section, we illustrate the three different layers and major components in each layer of an ICS.Furthermore, we provide an overview of a case study in the water management domain. ICS architecture Domain knowledge on ICSs is the starting point for the development and application of our proposed approach.A typical ICS consists of three layers: (i) Field instrumentation, (ii) Process control, and (iii) Supervisory control [26], bound together by network infrastructure, as shown in Fig. 1. The field instrumentation layer consists of sensors (S i ) and actuators (A i ), while the process control layer consists of Programmable Logic Controllers (PLCs)/Remote Terminal Units (RTUs).Typically, PLCs have wired communication capabilities whereas RTUs have wired or wireless communication capabilities.The PLC/RTU receives measurement data from sensors, and controls the physical systems through actuators [27].The supervisory control layer consists of historian databases, software application servers, the Human-Machine Interface (HMI), and the workstation.The historian databases and software application servers enable the efficient operation of the ICS.The low-level components are configured and monitored with the help of the workstation and the HMI, respectively [27]. Case study overview This case study overview is based on a site visit to a floodgate in the Netherlands.Some critical information has purposely been anonymised for security concerns.This case study is also used in our previous work [14].Fig. 2 schematises a floodgate being primarily operated by Supervisory Control and Data Acquisition (SCADA) system along with an operations centre. Fig. 
3 illustrates the SCADA architecture of the floodgate.The sensor (S 1 ), which is located near the floodgate, is used to measure the water level.There is also a water level scale which is visible to the operator from the operations centre.The sensor measurements are then sent to the PLC.If the water level reaches the higher limit, PLC would send an alarm notification to the operator through the HMI, and the operator would need to close the floodgate in this case.The HMI would also provide information such as the water level and the current state of the floodgate (open/close).The actuator opens/closes the floodgate.This case study would be used to demonstrate our proposed approach that would help to effectively construct CPTs involving domain experts. Framework for distinguishing attacks and technical failures This section describes the proposed framework including extended fishbone diagrams in our previous work with an example that could help to construct qualitative part (DAG) of BN models for distinguishing attacks and technical failures [14], which corresponds to structural modelling of BNs. The framework consists of three layers as shown in Fig. 4, which mainly shows different type of variables (i.e., contributory factors, problem, and observations (or test results)) and their relationships.The middle layer consists of a problem variable which is the major cause for an abnormal behaviour in a component of the ICS (observable problem).In the example shown in Fig. 4, we considered "Sensor (S 1 ) sends incorrect water level measurements" as the problem, which is observable.For instance, this problem could be observed by comparing the water level measurements sent by the sensor (S 1 ) against the measurements in the water level scale.We considered the major causes of the problem (intentional attack and accidental technical failure) as the states of the problem variable.In our work, we assume that problem (example: "Sensor (S 1 ) sends incorrect water level measurements") is already identified by the operator.The scope of our proposed approach is to distinguish between the major causes (i.e., intentional attack vs. accidental technical failure). The upper layer consists of factors contributing to the major causes of the problem.For instance, the factor "Weak physical access-control" contributes to "Sensor (S 1 ) sends incorrect water level measurements" due to intentional attack, whereas "Lack of physical maintenance" contributes to "Sensor (S 1 ) sends incorrect water level measurements" due to accidental technical failure.The lower layer consists of observations (or test results) which is defined as any information useful for determining the major cause of the problem based on the outcome of tests.For instance, the outcome of the test whether "Sensor (S 1 ) sends correct water level measurements after cleaning the sensor" would provide additional information to determine the major cause (accidental technical failure or intentional attack) of the problem accurately. 
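To make the three-layer structure concrete, the sketch below encodes a toy version of the example above as a discrete Bayesian network using the pgmpy library (our choice of toolkit, not one prescribed by this work). All variable names and probability values are hypothetical placeholders for illustration; in practice the parameters would be elicited from experts as described later. The final query shows how observing one test result updates the posterior over the major cause.

```python
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Three-layer structure: contributory factors -> problem -> observations (test results)
model = BayesianNetwork([
    ("WeakPhysicalAccessControl", "MajorCause"),
    ("LackOfPhysicalMaintenance", "MajorCause"),
    ("MajorCause", "AbnormalitiesElsewhere"),
    ("MajorCause", "CorrectAfterCleaning"),
])

# Priors for the contributory factors (state 0 = absent, 1 = present) -- hypothetical values
cpd_access = TabularCPD("WeakPhysicalAccessControl", 2, [[0.8], [0.2]])
cpd_maint  = TabularCPD("LackOfPhysicalMaintenance", 2, [[0.7], [0.3]])

# Problem variable (state 0 = accidental technical failure, 1 = intentional attack),
# conditioned on both factors; columns follow pgmpy's ordering of the evidence configurations
cpd_cause = TabularCPD(
    "MajorCause", 2,
    [[0.95, 0.90, 0.60, 0.50],   # P(technical failure | factor configuration) -- placeholders
     [0.05, 0.10, 0.40, 0.50]],  # P(intentional attack | factor configuration) -- placeholders
    evidence=["WeakPhysicalAccessControl", "LackOfPhysicalMaintenance"],
    evidence_card=[2, 2],
)

# Observations (test results) conditioned on the major cause -- hypothetical values
cpd_abn = TabularCPD("AbnormalitiesElsewhere", 2, [[0.9, 0.4], [0.1, 0.6]],
                     evidence=["MajorCause"], evidence_card=[2])
cpd_clean = TabularCPD("CorrectAfterCleaning", 2, [[0.3, 0.9], [0.7, 0.1]],
                       evidence=["MajorCause"], evidence_card=[2])

model.add_cpds(cpd_access, cpd_maint, cpd_cause, cpd_abn, cpd_clean)
assert model.check_model()

# Diagnostic query: posterior over the major cause given one observed test result
posterior = VariableElimination(model).query(
    ["MajorCause"], evidence={"AbnormalitiesElsewhere": 1})
print(posterior)
```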
The framework which we proposed in our previous work includes a systematic method based on fishbone diagrams for knowledge elicitation to construct the qualitative part of BN models [14].We adopted this approach because there are challenges to solely rely on BNs for knowledge elicitation to construct the qualitative part of BN models.It is not easy-to-use for knowledge elicitation involving domain experts as it could be time-consuming for elicitors to explain the notion of BNs [14].Furthermore, it could not encourage and guide data collection by showing where knowledge is lacking as it is not well-structured.On the other hand, fishbone diagrams help to systematically identify and organise the possible contributing factors (or sub-causes) of a particular problem [28][29][30][31][32].We extended fishbone diagrams to incorporate observations (or test results) in our previous work, which needs to be elicited for our application in addition to contributory factors. Fig. 5 shows an example extended fishbone diagram which consists of a problem ("Major cause for sensor (S 1 ) sends incorrect water level measurements"), contributing factors (or sub-causes) sorted and related under different categories on the left side of the problem.Each category on the left side of the problem represents the major causes of the problem (intentional attack and accidental technical failure).Our example shows that "Lack of physical maintenance" is the contributing factor to the problem ("Sensor (S 1 ) sends incorrect water level measurements") due to accidental technical failure.Furthermore, the observations (or test results) on the right side of the problem would provide additional information to determine the major cause of the problem accurately.Each category on the right side of the problem are used for reference to elicit observations (or test results) that would be useful for determining the particular major causes of the problem [14].Our example shows that the observation "abnormalities in other locations" would increase the probability of the problem ("Sensor (S 1 ) sends incorrect water level measurements") due to intentional attack. Once the extended fishbone diagram is developed, it would be translated into a corresponding qualitative BN model based on the mapping scheme in our previous work [14].However, the proposed framework lacked a systematic method for knowledge elicitation to construct the quantitative part of BN models (the CPTs), which we address in the current work. Techniques for reducing the burden of probability elicitation Probability elicitation is a challenging task in building BNs, especially when it relies heavily on expert knowledge [17].The extensive workload for experts in probability elicitation could affect the reliability of elicited probabilities.However, the workload for experts in probability elicitation could be reduced by reducing the number of conditional probabilities to elicit and facilitating individual probability entry. Technique for reducing the number of conditional probabilities to elicit This section analyses well-known techniques and describes the most suitable technique for our application, which would help to reduce the number of conditional probabilities to elicit. 
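As a quick illustration of why such techniques matter, the following sketch (ours, purely illustrative) contrasts the number of entries in a fully specified CPT with the roughly linear number of parameters required by a causal-independence model such as the noisy-OR or DeMorgan model; with five binary parents, the full table already contains 2^(5+1) = 64 entries.

```python
def full_cpt_entries(n_parents, parent_card=2, child_card=2):
    """Entries in a fully specified CPT: child_card * product of parent cardinalities."""
    return child_card * (parent_card ** n_parents)

def causal_independence_parameters(n_parents):
    """Approximate parameter count for a causal-independence model
    (one link strength per parent plus a leak term) -- linear, not exponential."""
    return n_parents + 1

print(f"{'parents':>7} {'full CPT':>9} {'causal independence':>20}")
for n in range(1, 9):
    print(f"{n:>7} {full_cpt_entries(n):>9} {causal_independence_parameters(n):>20}")
# With 5 binary parents: 64 CPT entries versus 6 elicited parameters.
```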
In order to reduce the number of conditional probabilities to elicit, we could exploit the causal independence models [17].Causal independence refers to the situation where the contributory factors (causes) contribute independently to the problem (effect) [33].By utilising these models, only a number of parameters that is linear in the number of contributory factors is needed to be elicited to define a full CPT for the problem variable as the total influence on the problem is a combination of the individual contributions [34].As an example, we shall consider the BN model depicted in Fig. 4, where the problem variable (Y) is a binary discrete variable with the states "Intentional Attack" and "Accidental Technical Failure".In the CPT shown in Fig. 6, Y = "Intentional Attack" denotes Y = "True", and Y = "Accidental Technical Failure" denotes Y = "False".We translated the states of Y into "True" and "False" to comply with the inherent assumptions of the noisy-OR model with regard to the states of variables.The typical state of each variable in the noisy-OR model is "False".For instance, the typical state of a child variable (Fever) in the noisy-OR model is "False" as it is normal.Therefore in our application, we assigned Y = "False" for Y = "Accidental Technical Failure" as this is the a priori expected major cause, based on the higher frequency of technical failures compared to the intentional attacks [14]. In our application, we are dealing with a combination of promoting and inhibiting influences.In case of a promoting influence, the presence (or absence) of the parent will result in the child event with a certain probability.When the parent is absent (or present), it is certain not to cause the child event.In other words, the presence (or absence) of the contributory factor will result in the problem ("Sensor (S 1 ) sends incorrect water level measurements") due to "intentional attack" with a certain probability as it denotes "True" state.For instance, the presence of "Weak physical access-control" will result in the problem ("Sensor (S 1 ) sends incorrect water level measurements") due to "intentional attack" with a certain probability, whereas the absence of "Weak physical access-control" will not certainly result in the problem ("Sensor (S 1 ) sends incorrect water level measurements") due to "intentional attack".This type of promoting influence is defined as a cause [18].On the other hand, the absence of "Sensor data integrity verification" will result in the problem ("Sensor (S 1 ) sends incorrect water level measurements") due to "intentional attack" with a certain probability, whereas the presence of "Sensor data integrity verification" will not certainly result in the problem ("Sensor (S 1 ) sends incorrect water level measurements") due to "intentional attack".This type of promoting influence is defined as a barrier [18]. 
In case of an inhibiting influence, the presence (or absence) of the parent will inhibit the child event with a certain probability.When the parent is absent (or present), it is certain not to inhibit the child event.In other words, the presence (or absence) of the contributory factor will result in the problem ("Sensor (S 1 ) sends incorrect water level measurements") due to "accidental technical failure" with a certain probability as it denotes "False" state.For instance, the presence of "Lack of physical maintenance" will result in the problem ("Sensor (S 1 ) sends incorrect water level measurements") due to "accidental technical failure" with a certain probability, whereas the absence of "Lack of physical maintenance" will not certainly result in the problem ("Sensor (S 1 ) sends incorrect water level measurements") due to "accidental technical failure".This type of inhibiting influence is defined as an inhibitor [18].On the other hand, the absence of "Well-written maintenance procedure" will result in the problem ("Sensor (S 1 ) sends incorrect water level measurements") due to "accidental technical failure" with a certain probability, whereas the presence of "Well-written maintenance procedure" will not certainly result in the problem ("Sensor (S 1 ) sends incorrect water level measurements") due to "accidental technical failure".This type of inhibiting influence is defined as a requirement [18]. Our example BN model shows that it possesses a mixture of promoting and inhibiting influences (causes and inhibitors) especially with regard to the interaction between the contributory factors and the problem.Therefore, we need a technique that would help to model opposing influences as we deal with a mixture of promoting and inhibiting influences in our application, which would help to reduce the number of conditional probabilities to elicit. We analysed several techniques and chose the most suitable technique for our application which would be described in Section 4.1.1.The description of techniques that are unsuitable for our application can be found in Appendix which includes the noisy-OR model and Causal Strength (CAST) logic.The noisy-OR model is one of the most commonly used causal independence models which helps to reduce the number of conditional probabilities to elicit [5,35].The noisy-OR model inherently assumes binary variables [36].The noisy-MAX model is an extension of the noisy-OR model which is suitable for multi-valued variables [37].We analysed the noisy-OR model as we deal with only binary variables in our application. The noisy-OR model assumes that the properties of exception independence and accountability hold true [38].In case all the modelled contributory factors of the problem ("Sensor (S 1 ) sends incorrect water level measurements") are false, the property of accountability requires that the problem be presumed false ("Sensor (S 1 ) sends incorrect water level measurements" due to "accidental technical failure").However, this would not work for inhibiting influences such as "Lack of physical maintenance" in the noisy-OR model as shown in Fig. 6.In case "Lack of physical maintenance" is absent, it is certain not to inhibit the problem which is incompatible with the property of accountability.Therefore, we found that the noisy-OR model is unsuitable for the purposes of our application because the property of accountability does not hold true. 
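As a concrete illustration of how causal independence keeps the elicitation burden linear, the sketch below builds a noisy-OR CPT from per-cause link probabilities and notes, in a comment, why the accountability property clashes with an inhibiting factor. The link-probability values are hypothetical.

```python
from itertools import product

def noisy_or(active_link_probs, leak=0.0):
    """P(effect = True) under noisy-OR, given link probabilities of the causes present."""
    p_not = 1.0 - leak
    for p in active_link_probs:
        p_not *= (1.0 - p)
    return 1.0 - p_not

# Hypothetical link probabilities for two promoting factors of the problem
# ("Sensor (S1) sends incorrect water level measurements" due to intentional attack).
links = {
    "Weak physical access-control": 0.50,
    "No sensor data integrity verification": 0.25,
}

# Only len(links) link probabilities (plus an optional leak) are elicited,
# yet they define the full CPT over all 2**len(links) parent combinations.
for combo in product([True, False], repeat=len(links)):
    present = [p for (name, p), on in zip(links.items(), combo) if on]
    print(dict(zip(links, combo)), "P(attack) =", round(noisy_or(present), 3))

# Accountability: with every modelled cause absent, P(attack) collapses to the leak
# (0 by default). A factor such as "Lack of physical maintenance", whose presence
# should *lower* the probability that an attack is the cause, cannot be expressed.
```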
Alternatively, CAST logic is one of the techniques mainly developed for modelling opposing influences [39].CAST logic assumes all the variables in the model are binary.The parameters which need to be elicited to completely define CPTs using CAST logic are: (i) causal strengths for each edge, and (ii) baseline probability for each variable.The baseline probability of a parent variable can be interpreted as the prior probability of the corresponding parent variable.However, it would not be appropriate to interpret the baseline probability of the child variable as a prior probability or a leak probability, as the parent variables have no state in which they are guaranteed to have no influence on the child variable [40].As the definition of baseline probability of child variable is not clear, we cannot formulate appropriate question to elicit baseline probability of child variable.This is the major disadvantage of CAST logic which resulted in the lack of practical applications [18,40].We conclude that neither the noisy-OR model nor the CAST logic is suitable for the purposes of our application. DeMorgan model As an alternative to the previously discussed models, the DeMorgan model can potentially be used to tackle the challenge of modelling opposing influences, which would help to reduce the number of conditional probabilities to elicit.This section explains the DeMorgan model.This section corresponds to parameter modelling of BNs, which show parameters that needs to be elicited from experts and corresponding questions that needs to be asked to experts during this elicitation process in addition to how the rest of the parameters in the CPTs are computed. The DeMorgan model is a technique mainly developed for modelling opposing influences, which would help to reduce the number of conditional probabilities to elicit [18,40].The DeMorgan model is applicable when there are several parents and a common child.The DeMorgan model inherently assumes binary variables.The DeMorgan model assumes that one of the two states of each variable is always the distinguished state as shown in Fig. 7. Usually such state of the child variable depends on the modelled domain [41].This is a typical state of the corresponding child variable [42].In case the child variable consists of two states ("disease", "no disease") in the medical domain, the distinguished state of the corresponding child variable is chosen as "no disease" as it is normal [41].In our application, the distinguished state of the problem variable ("Major cause for sensor (S 1 ) sends incorrect water level measurements") is chosen as "accidental technical failure" as this is the a priori expected major cause, based on the higher frequency of technical failures compared to the intentional attacks [14].The distinguished state of a parent variable is relative to the type of causal interaction with the child variable [18].The same parent variable can have different distinguished states in different interactions that it participates in with the different child variables. There are 4 different types of causal interactions between an individual parent (X) and a child (Y) in the DeMorgan model: (i) cause, (ii) barrier, (iii) inhibitor, and (iv) requirement. 
(i) Cause: X is a causal factor and has a positive influence on Y. In this type of causal interaction between an individual parent (X) and a child (Y), the distinguished state of the corresponding parent variable is "False" [18]. Consequently, when the parent variable is "False", it is certain not to trigger a change from the typical state of the child variable, as shown in Table 1. When the parent variable is "True", it will trigger a change from the typical state of the child variable with a certain probability (v X), as shown in Table 1.
(ii) Barrier: This is the negated counterpart of a cause, i.e., X′ is a causal factor and has a positive influence on Y. In this type of causal interaction between an individual parent (X) and a child (Y), the distinguished state of the corresponding parent variable is "True" [18]. Accordingly, when the parent variable is "True", it is certain not to trigger a change from the typical state of the child variable, as shown in Table 2. When the parent variable is "False", it will trigger a change from the typical state of the child variable with a certain probability (v X), as shown in Table 2.
(iii) Inhibitor: X inhibits Y. In this type of causal interaction between an individual parent (X) and a child (Y), the distinguished state of the corresponding parent variable is "False" [18]. As a result, when the parent variable is "False", it is certain not to prevent a change from the typical state of the child variable, as shown in Table 3. When the parent variable is "True", it will prevent a change from the typical state of the child variable with a certain probability (d X), as shown in Table 3.
(iv) Requirement: The relationship between an inhibitor and a requirement is similar to the relationship between a cause and a barrier. In this type of causal interaction between an individual parent (X) and a child (Y), the distinguished state of the corresponding parent variable is "True" [18]. Hence, when the parent is "True", it is certain not to prevent a change from the typical state of the child variable, as shown in Table 4. When the parent variable is "False", it will prevent a change from the typical state of the child variable with a certain probability (d X), as shown in Table 4.
Table 1 Type of Causal Interaction: Cause. Table 2 Type of Causal Interaction: Barrier. Table 3 Type of Causal Interaction: Inhibitor. Table 4 Type of Causal Interaction: Requirement.
The DeMorgan model is an extension and a combination of the noisy-OR and noisy-AND models, which supports modelling the abovementioned types of causal interactions [18]. Maaskant et al. modelled promoting influences, which include causes and barriers, by mimicking the noisy-OR model [18]. Furthermore, Maaskant et al. modelled inhibiting influences, which include inhibitors and requirements, by mimicking the noisy-AND model [18]. Finally, Maaskant et al. modelled the combination of promoting and inhibiting influences by combining the noisy-OR and noisy-AND model.
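The single-parent behaviour of the four interaction types (the content of Tables 1–4) can be summarised compactly. In the sketch below, y denotes the child's non-distinguished state, the leak parameter is ignored, and the parameter values are placeholders for quantities that would be elicited.

```python
# Single-parent behaviour of the four DeMorgan interaction types (cf. Tables 1-4).
# "y" is the child's non-distinguished state; the leak parameter is ignored here.
# v_x / d_x are the probabilities to be elicited; the values used are placeholders.

def single_parent_effect(kind, v_x=0.0, d_x=0.0):
    if kind == "cause":        # distinguished parent state: False
        return {"parent=False": 0.0, "parent=True": v_x}   # probability of triggering y
    if kind == "barrier":      # distinguished parent state: True
        return {"parent=True": 0.0, "parent=False": v_x}   # probability of triggering y
    if kind == "inhibitor":    # distinguished parent state: False
        return {"parent=False": 0.0, "parent=True": d_x}   # probability of preventing y
    if kind == "requirement":  # distinguished parent state: True
        return {"parent=True": 0.0, "parent=False": d_x}   # probability of preventing y
    raise ValueError(f"unknown interaction type: {kind}")

for kind in ("cause", "barrier", "inhibitor", "requirement"):
    print(kind, single_parent_effect(kind, v_x=0.5, d_x=0.5))
```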
The property of accountability in the noisy-OR model is applicable to the DeMorgan model with a slight modification as it also exploits causal independence: In case all the modelled parents of the child are in their distinguished state, the property of accountability requires that the child be presumed their distinguished state.However, in many cases, this is not a realistic assumption as it is difficult to capture all the possible parents of the child [34].Specifically, this is not realistic in our example as it is difficult to capture all the possible contributory factors of the problem ("Sensor (S 1 ) sends incorrect water level measurements") due to "intentional attack".In the DeMorgan model, the leak parameter (v XL ) deals with the possible parents of the child that are not previously known and explicitly modelled. In general, the size of the CPT of a binary variable with n binary parents is 2 n + 1 .However, only n + 1 parameters are sufficient to completely define CPT using the DeMorgan model as it exploits causal independence.In the example shown in Fig. 7, only 5 parameters are sufficient to completely define the CPT of child variable (Y) using the DeMorgan model instead of 64 entries.There are 2 different parameterisations for the Noisy-OR model with a leak parameter (the Leaky Noisy-OR model) proposed by Henrion [43] and Diez [37] which are mathematically equivalent.These 2 parameterisations differ only in the type of question that needs to be asked to the experts for knowledge elicitation.Henrion's parameterisation is supported by a question like: "What is the probability that the effect is true given that a cause (X i ) is true and all the modelled causes are false?".On the other hand, Diez's parameterisation is supported by a question like: "What is the probability that the effect is true given that a cause (X i ) is true and all other modelled and unmodelled causes are false?".The DeMorgan model utilised the Diez's parameterisation with a slight modification. We could find the values for required parameters from the experts to completely define CPT using the DeMorgan model based on appropriate question for each type of causal interaction detailed below: (i) The leak parameter: To find the value for the leak parameter, the elicitor could ask experts: "What is the probability that the child is in their non-distinguished state given that the parents are in their distinguished states?".In our example shown in Fig. 7, the elicitor could ask experts to find the value for parameter (v XL ): "What is the probability that the major cause for the observed problem (sensor (S 1 ) sends incorrect water level measurements) is intentional attack given that the physical access-control for sensor (S 1 ) is strong, data integrity verification is performed for sensor (S 1 ) data, sensor (S 1 ) is always physically maintained, maintenance procedure for sensor (S 1 ) is well-written?".(ii) Cause: To find the value for parameter corresponding to a cause, the elicitor could ask experts: "What is the probability that the child is in their non-distinguished state given that all the parents are in their distinguished states, except X i and no other unmodelled causal factors are present?".In our example shown in Fig. 
7, the elicitor could ask experts to find the value for parameter (v X1 ): "What is the probability that the major cause for the observed problem (sensor (S 1 ) sends incorrect water level measurements) is intentional attack given that the physical access-control for sensor (S 1 ) is weak, data integrity verification is performed for sensor (S 1 ) data, sensor (S 1 ) is always physically maintained, maintenance procedure for sensor (S 1 ) is well-written, and no other unmodelled causal factors are present?".(iii) Barrier: To find the value for parameter corresponding to a barrier, the elicitor could ask experts: "What is the probability that the child is in their non-distinguished state given that all the parents are in their distinguished states, except X i and no other unmodelled causal factors are present?".In our example shown in Fig. 7, the elicitor could ask experts to find the value for parameter (v X2 ): "What is the probability that the major cause for the observed problem (sensor (S 1 ) sends incorrect water level measurements) is intentional attack given that the physical access-control for sensor (S 1 ) is strong, data integrity verification is not performed for sensor (S 1 ) data, sensor (S 1 ) is always physically maintained, maintenance procedure for sensor (S 1 ) is well-written, and no other unmodelled causal factors are present?".(iv) Inhibitor: Maaskant et al. did not directly determine the value for parameters corresponding to inhibitors similar to causes and barriers as it is not practical for the example which they considered [40].Specifically, it makes less sense to ask for the effect of presence of parent ("Rain") on the child ("Bonfire"), when the child ("Bonfire") is "False".Therefore, they determined the value for parameter corresponding to each inhibitor by determining the negative influence relative to an arbitrary (non-empty) set of causes/barriers/leak parameter.However, in our application, it is possible to determine the value for parameter corresponding to inhibitors directly as we ask for the effect of presence of parent ("Lack of physical maintenance") on the child ("Major cause for sensor (S 1 ) sends incorrect water level measurements"), when the latter ("Major cause for sensor (S 1 ) sends incorrect water level measurements") is "Accidental technical failure".In order to find the value for parameter corresponding to an inhibitor directly, the elicitor could ask the experts: "What is the probability that the child is in their distinguished state given that the parents are in their distinguished states, except U i and no other unmodelled causal factors are present?".In our example shown in Fig. 7, the elicitor could ask experts to find the value for parameter d U1 : "What is the probability that the major cause for the observed problem (sensor (S 1 ) sends incorrect water level measurements) is accidental technical failure given that the physical access-control for sensor (S 1 ) is strong, data integrity verification is performed for sensor (S 1 ) data, sensor (S 1 ) is not always physically maintained, maintenance procedure for sensor (S 1 ) is well-written and no other unmodelled causal factors are present?".(v) Requirement: Maaskant et al. 
did not directly determine the value for the parameters corresponding to requirements, similar to causes and barriers, as it is not practical for the example which they considered [40]. Specifically, it makes less sense to ask for the effect of the absence of a parent on the child when the child is "False". Therefore, they determined the value for the parameter corresponding to each requirement by determining the negative influence relative to an arbitrary (non-empty) set of causes/barriers/leak parameter. However, in our application, it is possible to determine the value for the parameter corresponding to requirements directly, as we ask for the effect of the absence of the parent ("Well-written maintenance procedure") on the child ("Major cause for sensor (S 1 ) sends incorrect water level measurements") when the latter is "Accidental technical failure". In order to find the value for the parameter corresponding to a requirement directly, the elicitor could ask the experts: "What is the probability that the child is in their distinguished state given that the parents are in their distinguished states, except U i, and no other unmodelled causal factors are present?". In our example shown in Fig. 7, the elicitor could ask experts to find the value for parameter d U2: "What is the probability that the major cause for the observed problem (sensor (S 1 ) sends incorrect water level measurements) is accidental technical failure given that the physical access-control for sensor (S 1 ) is strong, data integrity verification is performed for sensor (S 1 ) data, sensor (S 1 ) is always physically maintained, maintenance procedure for sensor (S 1 ) is not well-written and no other unmodelled causal factors are present?".
Once we determine the required parameters based on the appropriate elicitation questions, we can completely define the CPT of the child variable using (1), which combines the promoting influences in a noisy-OR fashion and the inhibiting influences in a noisy-AND fashion:
P(y | X+, U+) = [1 − (1 − v XL) · ∏ Xi∈X+ (1 − v Xi)] · ∏ Ui∈U+ (1 − d Ui), with P(y′ | X+, U+) = 1 − P(y | X+, U+).  (1)
In Eq. (1), Y represents the effect variable, which has value y for the effect being in the non-distinguished state ("Intentional attack") and y′ for the effect being in the distinguished state ("Accidental technical failure"). X denotes the set of parents which interact with the effect variable as promoting influences, U denotes the set of parents which interact with the effect variable as inhibiting influences, X+ denotes the subset of X that contains all parents that are in their non-distinguished states, and U+ denotes the subset of U that contains all parents that are in their non-distinguished states. v XL denotes the leak parameter, which expresses the probability of y ("Intentional attack") given that all parents are in their distinguished states; v Xi denotes the probability of y ("Intentional attack") given that the parent X i is not in its distinguished state and all other parents are in their distinguished states; d Ui denotes the probability of y′ ("Accidental technical failure") given that the parent U i is not in its distinguished state and all other parents are in their distinguished states.
We choose the DeMorgan model for our application to reduce the number of conditional probabilities to elicit, as it supports modelling opposing influences with clear parameterisations.
Technique for facilitating individual probability entry
This section explains our chosen technique for facilitating individual probability entry for our application.
Our systematic method for knowledge elicitation to construct CPTs of BN models would be incomplete without a technique that facilitates individual probability entry.The DeMorgan models would help to reduce the number of conditional probabilities to elicit and allow elicitors to ask appropriate questions during probability elicitation.In addition, there should be a suitable technique which would make it easy for experts to answer elicitation questions in terms of probabilities. There are well-known methods such as probability scale [19,44], and probability wheel [45] which would help to facilitate individual probability entry [17,46].The basis for choosing a particular method includes accuracy, less probability elicitation time, and usability [46].Wang et al. compared three different methods: (i) direct estimation, (ii) probability wheel and (iii) probability scale in terms of their accuracy and time taken to elicit probabilities from experts [47].They pointed out that probability scale is better in terms of both accuracy and probability elicitation time compared to the other two methods. A probability scale can be a horizontal or vertical line with several anchors [46].Fig. 8 shows a probability scale with 7 numerical and verbal anchors [48].However, there are several variants of probability scales which would help to facilitate individual probability entry.Witteman et al. compared 3 probability scales: (i) probability scale with numerical and verbal anchors, (ii) probability scale with only numerical anchors, and (iii) probability scale with only verbal anchors [49].They compared 3 probability scales based on a study with general practitioners in the domain of medical decision making.They concluded that all 3 probability scales are equally suitable to facilitate individual probability entry.However, they recommended the probability scale with numerical and verbal anchors to facilitate individual probability entry as it provides numerical anchors for experts who prefer numbers and verbal anchors for experts who prefer words.Furthermore, Witteman et al. compared 2 different probability scales: (i) probability scale with numerical and verbal anchors, (ii) probability scale with only numerical anchors [50].They compared 2 probability scales based on a study with arts and mathematics students.They concluded that the probability scale with numerical and verbal anchors is more comfortable to use compared to the probability scale with only numerical anchors. There are real-world applications of the probability scale with numerical and verbal anchors in the elicitation of probabilities to construct the quantitative part of BN models [19,44].Van der Gaag et al. used the probability scale with numerical and verbal anchors for a case study in oesophageal cancer [19].This study was conducted with two experts in gastrointestinal oncology.The experts found that this method is easier to use than any other method they used before.Van der Gaag et al. also highlighted that the large number of probabilities are elicited in a reasonable time using this method.Furthermore, Hanninen et al. 
used the probability scale with numerical and verbal anchors for the construction of quantitative part of collision and grounding BN model [44].This study was conducted with 8 experts who possessed maritime working experience between 3 and 30 years.These studies show that the probability scale with numerical and verbal anchors can be used for facilitating individual probability entry involving experts with different background. We choose probability scales for our application as they are better in terms of accuracy and probability elicitation time compared to other methods.In particular, we would employ the probability scale with numerical and verbal anchors to facilitate individual probability entry in our application as they are effective and practicable based on previous studies.We would utilise the probability scale with 7 numerical and verbal anchors to facilitate individual probability entry with a variation.In our application, the experts could mark the suitable probability among 7 anchors in the scale directly or express fine-grained probabilities using the probability scale with numerical and verbal anchors as a supporting aid to visualise the probability range.This is convenient when the experts would like to express fine-grained probabilities based on historical data which is realistic for accidental technical failures in our application. As a part of the probability elicitation process, in addition to the case outline, we also need to provide information related to the type of floodgate (examplecriticality rating: very high) and context (example threat level: substantial).This guideline would help to avoid very diverse responses over participants as they have substantive information based on the system knowledge.This is evident from our application of the proposed approach [16].Furthermore, it is also important to select appropriate group of experts to elicit probabilities considering the type of floodgate and needed expertise.For instance, in our application of the proposed approach, we relied on experts who have experience working with safety and/or security of ICS in the water management sector in the Netherlands as we dealt with a type of floodgate in the Netherlands [16]. Finally, focus group workshop is one of the approaches that can be used to facilitate the probability elicitation process in addition to questionnaire [16].The use of focus group workshops can also help to facilitate discussion among the participants once we gather the responses from each of them on the reasoning behind the varied probabilities which they provided for some variables (if any) [16].These mechanisms would supplement the probability scales with numerical and verbal anchors and allow us to elicit reliable probabilities. Application of the methodology In this section, we use an illustrative case of a floodgate in the Netherlands to explain how we effectively construct CPTs of BN models for distinguishing attacks and technical failures. 
We considered the upper and middle layer of our framework for the application of our methodology.It is important to reduce the number of conditional probabilities to elicit for the problem variable as a considerable number of contributory factors (upper layer), corresponding to intentional attack and accidental technical failure, typically interact with the problem variable (middle layer), which in turn increases the CPT size of the problem variable exponentially.On the other hand, the conditional probabilities for observations (or test results) (lower layer) would be easy to elicit directly as there is only one problem variable (middle layer) in our framework, which makes the CPT size of an observation (or test result) variable to 4 (2 1+1 ).We shall consider the BN model with the upper and middle layer of our framework depicted in Fig. 7 for the application of our methodology.We considered the problem "Sensor (S 1 ) sends incorrect water level measurements" as it could develop more complex situations in the case of floodgate.In case the floodgate closes when it should not be based on the incorrect water level measurements sent by the sensor (S 1 ), it would lead to severe economic damage, for instance, by delaying cargo ships.On the other hand, in case the floodgate opens when it should not be due to incorrect water level measurements sent by the sensor (S 1 ), it would lead to flooding. The normal text (i.e., text without bold formatting) in Table 5 denotes the explicitly mentioned causal factors that are absent (Example: data integrity verification is performed for the sensor (S 1 ) data, sensor (S 1 ) is always physically maintained, maintenance procedure for sensor (S 1 ) is well-written).This makes the probability elicitation process simple as they do not affect the corresponding probability based on our structural assumptions.The experts could directly read the remaining text (i.e., text with bold formatting) (Example: "What is the probability that the major cause for the observed problem (sensor (S 1 ) sends incorrect water level measurements) is intentional attack given that the physical access-control for sensor (S 1 ) is weak and no other unmodelled causal factors are present?")and mark the answer which could also reduce probability elicitation time. We considered 4 contributory factors to the major causes (intentional attack or accidental technical failure) of the observed problem: (i) Weak physical access-control (X 1 ), (ii) Sensor data integrity verification (X 2 ), (iii) Lack of physical maintenance (U 1 ), and (iv) Well-written maintenance procedure (U 2 ) as shown in Fig. 
7 to depict each type of causal interaction. The type of causal interaction between individual parent X 1 and the child Y is a cause. The type of causal interaction between individual parent X 2 and the child Y is a barrier. The type of causal interaction between individual parent U 1 and the child Y is an inhibitor. The type of causal interaction between individual parent U 2 and the child Y is a requirement. In this example, we need to elicit only 5 (4 + 1) parameters instead of 32 (2^(4+1)) to completely define the CPT for the problem variable. The 5 parameters which we need to elicit are: the leak parameter v XL, the parameter v X1 for the cause, the parameter v X2 for the barrier, the parameter d U1 for the inhibitor, and the parameter d U2 for the requirement. The values for these 5 parameters could be elicited from experts by providing the appropriate elicitation questions based on the DeMorgan model, together with the probability scale with numerical and verbal anchors, which could help experts answer the elicitation questions in terms of probabilities, as shown in Table 5. As noted above, the normal (non-bold) text in Table 5 does not affect the corresponding probability based on our structural assumptions, so the experts could directly read the bold text and mark the answer for each question, which also reduces probability elicitation time. Suppose the expert marks the answer for v XL as 0.15, v X1 as 0.50, v X2 as 0.25, d U1 as 0.85, and d U2 as 0.50. These probabilities are examples to demonstrate the application of the methodology.
Once we elicit all the required parameters, we could use (1) to completely define the CPT for our example BN model. For instance, we could use (1) to calculate one of the conditional probabilities; the number with bold formatting in Table 6 denotes this probability. The completed CPT for the problem variable (Y) is shown in Table 6. Once we complete the CPT for the problem variable, we could define the a priori probabilities for each contributory factor and observation (or test result) by eliciting the corresponding probabilities directly from the experts, as they are not complicated. An example BN model with corresponding CPTs for each variable is shown in Fig. 9.
Once the problem ("Sensor (S 1 ) sends incorrect water level measurements") is observed in the floodgate, the evidence (True/False) for contributory factors and observations (or test results) could be set by the operator (or end-user) to determine the major cause for the observed problem. Once the evidence for contributory factors and observations (or test results) is set, the posterior probability of the problem variable would be computed accordingly. Based on the computed posterior probability, the appropriate response strategy could be put in place for the most likely major cause (intentional attack/accidental technical failure) of the observed problem ("Sensor (S 1 ) sends incorrect water level measurements"), thereby minimising negative consequences. In the example shown in Fig.
10, we provided the evidence for the contributory factors "Weak physical access-control (X 1 ) = True", "Sensor data integrity verification (X 2 ) = False", "Lack of physical maintenance (U 1 ) = False", "Well-written maintenance procedure (U 2 ) = True", and the observations (or test results) "Abnormalities in other locations (Z 1 ) = True" and "Sensor (S 1 ) sends correct water level measurements after recalibrating the sensor (Z 3 ) = False". On the other hand, we did not provide the evidence for the problem "Major cause for sensor (S 1 ) sends incorrect water level measurements (Y)" and the observation (or test result) "Sensor (S 1 ) sends correct water level measurements after cleaning the sensor (Z 2 )". The BN computes the posterior (updated) probabilities of the other nodes (Y and Z 2 ) based on the provided evidence. The BN in Fig. 10 determines that the major cause for the observed problem "Sensor (S 1 ) sends incorrect water level measurements" is most likely intentional attack, as the corresponding posterior probability (0.97306) is higher than the posterior probability of accidental technical failure (0.02694).
Discussion
An example parameter elicitation for the problem variable (Y) without the reduced number of conditional probabilities is provided in Table 7. This example helps to highlight the key challenges, in particular the large number of parameters that would otherwise have to be elicited. Furthermore, Zhang et al. also highlighted that reducing the number of conditional probabilities to elicit reduces uncertainty and bias and improves elicitation accuracy [57]. Finally, Zagorecki et al. conducted an empirical study to elicit probabilities under noisy-OR assumptions in addition to eliciting complete probabilities directly from human experts [57]. Like the DeMorgan structural assumptions, the elicitation of probabilities under noisy-OR assumptions reduces the number of parameters that need to be elicited from exponential to linear in the number of parents to define a full CPT for the child variable. Based on the empirical study, Zagorecki et al. concluded that the elicitation of probabilities under noisy-OR assumptions yields better accuracy than the elicitation of complete probabilities directly from human experts [58].
To determine the most critical variables, sensitivity analysis is performed with Y (Major cause for sensor (S 1 ) sends incorrect water level measurements) selected as the target node. The sensitivity levels are shown in Fig. 11. According to the results of the tornado diagram, which shows the 10 most critical events leading to Y due to intentional attack, "Lack of physical maintenance", "Well-written maintenance procedure", and "Weak physical access-control" were identified as the three most influential variables. Based on the tornado diagram, "Lack of physical maintenance" is identified as the most influential variable in the occurrence of the studied scenario. This in turn would help to focus on the most critical variables during elicitation.
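To tie the worked example together, the sketch below implements the combination of Equation (1) and evaluates it with the example parameters elicited above (v XL = 0.15, v X1 = 0.50, v X2 = 0.25, d U1 = 0.85, d U2 = 0.50), producing the CPT of the problem variable in the spirit of Table 6. The formulation assumes the standard DeMorgan combination described in [18,40]; the mapping of parent states to "non-distinguished" follows the floodgate example (e.g., "Weak physical access-control = True" and "Sensor data integrity verification = False" are non-distinguished).

```python
from itertools import product

def p_attack(nondist_promoting, nondist_inhibiting, v_leak, v, d):
    """Equation (1): DeMorgan combination of promoting (noisy-OR style) and
    inhibiting (noisy-AND style) influences on P(Y = Intentional Attack)."""
    not_promoted = 1.0 - v_leak
    for x in nondist_promoting:
        not_promoted *= (1.0 - v[x])
    p = 1.0 - not_promoted                  # leak, causes and barriers
    for u in nondist_inhibiting:
        p *= (1.0 - d[u])                   # inhibitors and requirements
    return p

# Example parameters marked by the expert (illustrative values from the text).
v_leak = 0.15
v = {"X1: weak physical access-control (True)": 0.50,          # cause
     "X2: sensor data integrity verification (False)": 0.25}   # barrier
d = {"U1: lack of physical maintenance (True)": 0.85,          # inhibitor
     "U2: well-written maintenance procedure (False)": 0.50}   # requirement

parents = list(v) + list(d)
for combo in product([True, False], repeat=len(parents)):       # True = non-distinguished
    nd = [name for name, on in zip(parents, combo) if on]
    pa = p_attack([x for x in nd if x in v], [u for u in nd if u in d], v_leak, v, d)
    print(f"P(attack) = {pa:.4f}  P(technical failure) = {1 - pa:.4f}  | {nd or 'all distinguished'}")
```

With all four parents in their distinguished states the sketch returns the leak value 0.15; the remaining rows of the 32-entry CPT follow from the five elicited parameters alone, which is precisely the reduction the DeMorgan model provides.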
Performance-based weighting is one of the systematic approaches that can help to ensure the accuracy of elicited parameters [51]. In this approach, each expert is weighted based on their performance in answering calibration (or seed) questions. These are a set of questions from the experts' field that have observed true values and are also closely related to the variables of interest [52]. The overall weight for each expert can be obtained by multiplying two separate scores: a statistical accuracy (or calibration) score and an information score [53]. The accuracy score assesses how close an expert's estimate is to the true value, while the information score assesses the amount of entropy in what the expert says, i.e., in the expert's assessments. The overall weight for each expert can then be used to combine multiple expert judgements. Eggstaff et al. highlighted that performance-based weighting significantly outperforms equally weighting expert judgements [54]. There are various applications of performance-based weighting [51,55,56]. This can supplement the proposed framework to ensure the accuracy of elicited parameters.
Conclusions and future work directions
Limited availability of data is one of the key challenges to constructing BN models in domains like cyber security, which results in modellers depending on expert knowledge. However, BNs are not well-suited for knowledge elicitation involving domain experts. In our previous work, we developed a systematic method using fishbone diagrams for knowledge elicitation involving domain experts to construct the DAGs of BN models for distinguishing attacks and technical failures. Noticeably, that systematic method addressed only the qualitative part of the BN models; a systematic method to construct the corresponding CPTs was still missing, which the present work addresses by combining the DeMorgan model with the probability scale with numerical and verbal anchors.
Table 7 (excerpt). Parameter elicitation for the problem variable (Y) without the reduced number of conditional probabilities; for example, question 16 reads: "What is the probability that the major cause for the observed problem (sensor (S 1 ) sends incorrect water level measurements) is intentional attack given that the physical access-control for sensor (S 1 ) is strong, data integrity verification is not performed for sensor (S 1 ) data, sensor (S 1 ) is always physically maintained, maintenance procedure for sensor (S 1 ) is not well-written?"
Appendix
The noisy-OR model
The noisy-OR model assumes that each cause is able on its own to produce the effect and that the hidden processes that may inhibit the occurrence of the effect are mutually independent [35]. In case all the modelled causes of the effect are false, the property of accountability requires that the effect be presumed false, i.e., P(y′ | x 1 ′, x 2 ′, …, x n ′) = 1. In the noisy-OR model, the effect can be caused by any cause, similar to a logical OR. However, the relationship is not deterministic: each of the causes X i alone can cause the effect with probability p i, which is known as the link probability [36], i.e., p i = P(y | x 1 ′, …, x i, …, x n ′), where x j ′ represents the absence of the other causes except X i. The probability of the effect for any combination of active causes can be calculated as
P(y | X) = 1 − ∏ Xi∈X (1 − p i),
where X represents all active causes.
Causal strength (CAST) logic
CAST logic is applicable when there are several parents and a common child, as shown in Fig. 2A [39]. CAST logic assumes all the variables in the model are binary. CAST logic has so far been applied only in the international policy and crisis analysis domain [41]. The interaction between a parent and the common child can be either promoting or inhibiting. The promoting influence is depicted by an arrowhead, whereas the negative influence is illustrated by a filled circle as shown in Fig.
2A. The parameters which need to be elicited to completely define CPTs using CAST logic are: (i) causal strengths (g Xi, h Xi) for each arc, and (ii) a baseline probability (b) for each variable. The values of the causal strengths (g Xi, h Xi) are not probabilities and can take any value from the range [−1, 1]. The value of the causal strength h Xi indicates the change in belief of Y relative to the baseline probability of Y (b Y) under the assumption that X i is in the "True" state. For instance, h X1 indicates how much the presence of X 1 would change our belief of Y. On the other hand, the value of the causal strength g Xi indicates the change in belief of Y relative to the baseline probability of Y (b Y) under the assumption that X i is in the "False" state. For instance, g X1 indicates how much the absence of X 1 would change our belief of Y. Once we elicit the above-mentioned parameters, we can apply the CAST algorithm for every combination of parent states to completely define the CPT of the child variable. The CAST algorithm consists of four steps: (i) aggregate the positive causal strengths, (ii) aggregate the negative causal strengths, (iii) combine the positive and negative causal strengths, and (iv) derive the conditional probabilities.
In the first step, the positive causal strengths are aggregated using (1A), where s Xi can be g Xi or h Xi depending on the state of the parent. In the second step, the negative causal strengths are aggregated using (2A), where s Xi can again be g Xi or h Xi depending on the state of the parent. In the third step, the positive and negative causal strengths are combined: the overall influence (O) of all parents is determined using (3A) if S+ ≥ S− and using (4A) otherwise. In the final step, the conditional probabilities are derived using (5A) if O j ≥ 0 and using (6A) if O j < 0, where O j denotes the overall influence of the j-th combination of parent states X j.
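Because equations (1A)–(6A) are not reproduced above, the sketch below illustrates one common reading of the four CAST steps. The aggregation, combination, and rescaling formulas are assumptions based on typical presentations of CAST logic rather than formulas taken from this paper, and the causal-strength values are made up.

```python
def cast_probability(strengths, baseline):
    """Illustrative CAST-style derivation of P(child = True) for one combination of
    parent states. `strengths` are the applicable causal strengths (g or h, depending
    on each parent's state), each in [-1, 1]; `baseline` is the child's baseline
    probability. The formulas below are an assumed, common formulation of CAST."""
    pos = [s for s in strengths if s > 0]
    neg = [-s for s in strengths if s < 0]

    s_plus = 1.0                                   # step (i): aggregate positive strengths
    for s in pos:
        s_plus *= (1.0 - s)
    s_plus = 1.0 - s_plus

    s_minus = 1.0                                  # step (ii): aggregate negative strengths
    for s in neg:
        s_minus *= (1.0 - s)
    s_minus = 1.0 - s_minus

    if s_plus >= s_minus:                          # step (iii): combine into overall influence O
        overall = (s_plus - s_minus) / (1.0 - s_minus) if s_minus < 1.0 else 1.0
    else:
        overall = (s_plus - s_minus) / (1.0 - s_plus) if s_plus < 1.0 else -1.0

    if overall >= 0:                               # step (iv): shift the baseline towards 1 or 0
        return baseline + overall * (1.0 - baseline)
    return baseline + overall * baseline

# Hypothetical strengths: one promoting parent (0.6) and one inhibiting parent (-0.4).
print(cast_probability([0.6, -0.4], baseline=0.3))
```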
Constraints (Cs) and requirements (Rs)
Based on the responses which we received from the experts to those questions, the following set of constraints and high-level requirements was extracted by manually analysing the interview notes and summarising the essence of the responses:
C1. When the operators notice an abnormal behaviour in a component of the ICS, they presume that this is due to a technical failure and initiate the corresponding response procedures. The response strategy initiated towards a technical failure is not effective in case of an attack.
C2. There is a lack of real data regarding cyber-attacks, as the operators claim that there are no or only limited cyber-attacks on their infrastructures. Furthermore, such data is not shareable due to its sensitivity.
C3. Technical failures occur in their infrastructures and are documented as technical failure reports. However, these are also not shareable due to the sensitivity of the data.
C4. The automation department deals with technical failures, whereas the security department deals with cyber-attacks in the water management infrastructure. There are experts who have expertise in dealing with both technical failures and cyber-attacks.
C5. Experts are limited in this domain, with limited time availability.
C6. A real water management infrastructure like a floodgate is not available for the evaluation of the developed method due to availability and criticality issues.
C7. There are system architectures with specific components which are not shareable due to sensitivity issues. However, there is a possibility to arrange a visit to a water management infrastructure, which could help to understand the system architecture at a high level. Furthermore, the system architecture needs to be anonymised when publishing it.
C8. There is a need for decision support that would help operators to distinguish between intentional attacks and accidental technical failures, as it provides input to the decision-makers to choose an appropriate response strategy. However, the selection of these response strategies also depends on cost-benefit considerations and feasibility.
R1. An effective and practical alternative to data-driven approaches for developing decision support to distinguish between attacks and technical failures is required.
R2. Decision support should help operators to distinguish between attacks and technical failures by taking into account real-time system information.
R3. The method for developing decision support should facilitate the involvement of experts from the department that deals with technical failures and the department that deals with cyber-attacks, including experts who have expertise in dealing with both.
R4. The workload of experts during the knowledge elicitation process for developing decision support to distinguish between attacks and technical failures should be limited.
R5. The reliability of knowledge elicited for developing decision support to distinguish between attacks and technical failures should be ensured.
R6. The developed decision support should be scalable to different problems in the real environment.
Fig. 11. Tornado Diagram Obtained from Sensitivity Analysis for Major Cause for Sensor (S 1 ) Sends Incorrect Water Level Measurements.
Table 5 Parameter Elicitation for the Problem Variable (Y): Example. Major cause for sensor (S 1 ) sends incorrect water level measurements (Y).
Table 7 (continued).
Discussion guide
Q1. When the operator notices an abnormal behaviour in a component of the ICS, how do they respond to it?
Q2. Do you have a mechanism for the operator to determine whether an abnormal behaviour in a component of the ICS is due to attacks or technical failures?
Q3. Does the same department deal with the attacks and technical failures? If not, how?
Q4. Which functionalities do you think are important in a system which helps to distinguish between attacks and technical failures?
Q5. Are there any cyber-attacks reported in your infrastructure?
Q6. Are there any technical failures reported in your infrastructure?
Q7. Do you have a repository of technical failure reports?
Q8. If so, is this repository of technical failure reports available for research or not?
Q9. What do you think are the alternate data sources available for research?
Q10. What are the challenges you foresee in the alternate data sources you proposed?
Q11. In addition to risk factors and symptoms based on tests, what are other elements that you would take into account when you diagnose an (intentional) attack on a component?
Q12. In addition to risk factors and symptoms based on tests, what are other elements that you would take into account when you diagnose an (accidental) technical failure?
Q13. Is it possible to evaluate the developed method in the real water management infrastructure? If so, are there any challenges?
Q14. Do we have access to system architectures of any real water management infrastructure or not?
Topology and Control Strategy of PV MVDC Grid-Connected Converter with LVRT Capability : This paper proposes an isolated buck-boost topology and control strategy for the photovoltaic (PV) medium-voltage DC (MVDC) converter with low-voltage ride through (LVRT) capability. The proposed isolated buck-boost topology operates on either boost or buck mode by only controlling the active semiconductors on the low-voltage side. Based on this topology, medium-voltage (MV) dc–dc module is able to be developed to reduce the number of modules and increase the power density in the converter, which corresponds to the first contribution. As another contribution, a LVRT method based on an LC filter for MVDC converter is proposed without additional circuit and a feedback capacitor current control method for the isolated buck-boost converter is proposed to solve the instability problem caused by the resonance spike of the LC filter. Five kV/50 kW SiC-based dc–dc modules and ±10 kV/200 kW PV MVDC converters were developed. Experiments of the converter for MVDC system in the normal and LVRT conditions are presented. The experimental results verify the effectiveness of the proposed topology and control strategy. Introduction In recent years, PV power generation has developed rapidly. At the same time, medium-voltage DC (MVDC) power distribution systems have gradually begun to demonstrate applications. Compared with traditional AC grid-connected systems, PV DC gridconnected systems have fewer conversion links, no reactive power loss, reduced transmission line costs, and no power quality problem such as harmonics. It has the characteristics of low cost and high efficiency [1][2][3]. The PV MVDC grid-connected system is a new type of PV system as shown in Figure 1. The PV DC grid-connected converter is the core equipment in the system. It has the following characteristics: (1) High boost gain. The voltage gain is tens or even hundreds and can be changed in a wide range; (2) Medium DC output voltage [4]; (3) Galvanic separation. For medium-or high-voltage application, galvanic separation should be applied to prevent dc fault propagation [5]; (4) LVRT capability in case of grid voltage dip [6]. The capability of LVRT enables the converter to withstand voltage sags and maintain the connection to the grid and avoid loss of power generation. Power injection during LVRT enhances the voltage of the common coupling point [7]. There are two challenges for the MVDC converter to achieve LVRT. Firstly, the converter should not only step up voltage in normal mode, but also step down voltage when the grid voltage drops. Secondly, the discharge of output capacitor leads to current spike during the process of LVRT. The scheme to limit the current spike should be carried out. Plenty of research works on isolated dc-dc converters have been carried out, but most of them are not fit for MVDC application because of limited ratings of the semiconductors. Series-connected active semiconductors are a solution for MVDC application [8], but it is difficult to keep voltage balance on the switches, especially in high frequency. Modular multilevel dc-dc converter is one solution for MVDC output. A high-frequencylink dc-dc converter based on modular multilevel topology is proposed in [9,10]. However, the voltages of these topologies are stepped up only by transformer and the boosting capacity is limited. 
As the capacity increases above megawatt level, design and manufacturing of high-frequency transformer (HFT) become more difficult in terms of insulation, cooling [11]. Topology of dc-dc modules with input parallel and output series (IPOS) configuration is another approach for MVDC application. The output voltages of the modules are connected in series to realize the MVDC output of the converter. The capacity and voltage level of the dc-dc modules are greatly reduced relative to the converter. Thereby the difficulty of designing and manufacturing the dc-dc module and HFT is reduced. Dc-dc modules are the core in an IPOS converter. Different module topologies will lead to different characteristics of the converter. In order to realize high-voltage output and increase the power density of the converter, medium-voltage dc-dc module is required to reduce the number of modules. Dual-Active-Bridge (DAB) is a popular topology in solid state transformer. The voltage gain range is widened by optimized modulation method [12], but it is not fit for highvoltage gain application because of low-input voltage utilization and limitation of boosting capacity. High-voltage gain can be achieved in isolated boost topology [13]. However, it cannot work when the output voltage is lower than input voltage, LVRT cannot be realized. In addition, a startup circuit on the high-voltage side is required in boost topology. An isolated buck-boost converter can operate on either buck or boost mode to widen the output voltage range [14]. Not only high-voltage gain, but also wide-voltage range is achieved by the isolated buck--boost converter. Some isolated buck-boost converters are derived by merging a buck converter with a boost converter. A buck cascades a DAB to form a buck-boost topology in [15,16]. A switched-capacitor-based submodule cascades a DAB to form a buck-boost in [17]. The cascaded two stages decrease the efficiency and increase the cost of the converter. The semi-active rectifier-based isolated buck-boost converters are developed in [18,19]. A three-level full-bridge isolated buck-boost converter is proposed in [20] to increase the output voltage. However, active switches should be included on both the primary and secondary sides of the transformer to achieve boost and buck mode in the existing isolated buck-boost converters. The voltage rating of active switches limits the output voltage level of the converter. Several works are done on the dc converter to avoid overcurrent when voltage dip occurs. A practical solution of dc-dc converter based on switched capacitor is proposed in [21]. The switched capacitor can disconnect from the MVDC grid effectively as dc breaker to avoid capacitor discharge during LVRT. A modular hybrid-full bridge dc converter is proposed in [22]. The full-bridge on the MVDC side is composed of modular-fullbridge (MFB). A dc-dc converter with active front end is presented in [23] to avoid dc capacitors of power modules being directly exposed to the MVDC side. These converters avoid overcurrent during LVRT by introducing an additional circuit on the MVDC side, which increases the cost and reduces the efficiency of the converter. Isolated buck-boost topology is a suitable solution for a PV MVDC converter with LVRT capability. In order to increase the density and achieve higher voltage, developing a MV dc-dc module is important for a MVDC converter (output voltage above ±10 kV). 
It is difficult for the aforementioned isolated buck-boost topologies to develop a MV dc-dc module because active semiconductors are applied on the high-voltage side. A large number of modules are required and the power density of the MVDC converter is decreased. Moreover, these existing solutions for achieving LVRT must involve additional semiconductors circuit, which leads to additional loss and restricts output voltage of the dc-dc modules. In order to solve this problem, this paper proposes a new solution for a PV MVDC grid-connected converter with LVRT capability. Compared to the existing work, this paper's contribution includes: (1) A novel MV isolated buck-boost topology and its modulation method is proposed. By applying only diodes on the high-voltage side, MV dc-dc module can be developed by this topology to reduce the number of modules in the converter and increase the power density. High-voltage gain will be realized in boost mode, and LVRT will be realized in buck mode. (2) The LC filter for MVDC converter is proposed to suppress the overcurrent caused by voltage dip, and the LC filter parameter design method is proposed. Compared with the method mentioned above, the converter efficiency is increased and the cost is decreased. In order to solve the unstability problem caused by the resonance spike of the LC filter, a control algorithm based on feedback capacitor current for the isolated buck-boost converter is proposed to ensure the stability. (3) Experiments of the converter for MVDC system in the normal and LVRT condition are presented. The rest of the paper is organized as follows. The proposed isolated buck-boost topology and its principle modulation method are described in Section 2. The LC filter for the MVDC converter is also discussed in this section. In Section 3, the unstability problem caused by the resonance spike of the LC filter is analyzed. A control algorithm based on feedback capacitor current is proposed. The complete control strategy for the converter is also proposed there. In Section 4, 5 kV/50 kW SiC-based dc-dc modules and ±10 kV/200 kW PV MVDC converters are developed and laboratory tests result are analyzed. Section 5 concludes this paper. In order to achieve MV output, the high-voltage side of the proposed topology only consists of 4 diodes without active semiconductors. The medium-voltage module is able to be developed by employing silicon rectifier stack with high-voltage rating. The control system is simplified without control circuit for the high-voltage side. Proposed Topology Buck and boost modes are achieved by controlling the active semiconductors ( ~ ) on the low-voltage side. Through the coordinated control of the duty cycle of ~ , the voltage of clamping capacitor can be controlled to be either higher or lower than the input voltage. As the capacitor voltage is higher than the input voltage , works and is off. The topology operates in boost mode. In this mode, is the clamping switch, ~ and inductor realize step-up voltage. Otherwise, is on and is bypassed. The topology operates in buck mode. The equivalent circuit of the proposed isolated buck-boost topology is shown in Figure 3. is the leakage inductor of the transformer. In the boost mode, the circuit is able to obtain high-voltage gain by full bridge boost on the low-voltage side and HFT. Not only high-voltage gain, but also a wide range of input voltage is achieved, which meets the requirement of the PV MVDC system. In buck mode, the output voltage range is from 0 to N . 
LVRT can be realized in buck mode. Zero output voltage start-up is achieved without additional pre-charging circuit. Redundant module smoothly cutting in online in IPOS can also be realized. Modulation of the Proposed Topology A unified modulation method is proposed for the two modes in Figure 4. The phase shift between the same half bridge switches is 180°. The phase shift between the upper or lower arm switches is 180° too. The two lower arm switches of and have the same duty cycle of 0.5. The two upper arm switches of and have the same duty cycle, which is denoted as D. The adjustment range of D is 0 to 1 in ideal condition. The switching sequences of switch and determines the sequence of switch . The circuit operates in buck mode as D is less than 0.5. The circuit operates in boost as D is more than 0.5. There are current continuous mode (boost-CCM) and current discontinuous mode (boost-DCM) according to input inductor current in boost mode. Boost-CCM mode is divided into boost-CCM1 and boost-CCM2 according to the leakage current of the transformer. In buck mode, there are current continuous mode buck-CCM, current discontinuous modes buck-DCM1 and buck-DCM2 according to the leakage inductor current . Usually, the input inductor is designed as current continuous mode. The leakage inductor of HFT is small to reduce the voltage on the leakage inductor. The leakage inductor current is usually discontinuous. Therefore, boost-CCM2 and buck-DCM2 are commonly used modes. Figure 4a,b are the switching sequence diagram in boost-CCM2 mode and buck-DCM2 mode, respectively. Derivation of Voltage Gain In boost mode, the voltage gain can be obtained by volt-second balance of the leakage inductor and input inductor . It can be obtained by volt-second balance of the leakage inductor in buck mode. The voltage gain in boost and buck modes are shown in Table 1. The following is the derivation process of buck-DCM2 and boost-CCM2. In the buck-DCM2 mode, the volt-second balance of leakage inductor is given as: is the current discontinuous time in buck mode as shown in Figure 4b. N is the transformer turn ratio. The output current can be expressed as: is the switching frequency. R is the load. The voltage gain in buck_DCM2 mode _ can be expressed as: In boost-CCM2 mode, the volt-second balance of input inductor is given as: The relationship between and can be expressed as: In the boost-CCM2 mode, the relationship between and can be obtained by Equation (4): The voltage gain in boost-CCM1 mode _ can be expressed as: Mode Voltage Gain Boost According to Table 1, the relationship between voltage gain and duty cycle can be obtained. Figure 5 shows the relationship between voltage gain and duty cycle. The voltage gain is related to . is denoted as = 2 / . As increases, the voltage gain curve becomes flatter. Therefore, the voltage gain is able to increase by reducing the leakage inductor or increasing the load resistance. Table 2 lists the comparison of different isolated topologies related to circuit structure and performance. DAB has limitation of boosting capacity and the output voltage is lower than other topologies. Isolated boost has strong boosting capacity but cannot operate in low-voltage range. In the isolated buck-boost topologies [18] and [20], buck and boost modes are achieved by coordinated control of active switches of the primary and secondary sides. The output voltage cannot reach MV level. Switch drivers and auxiliary power supply is required on the high-voltage side. 
They must sustain high-voltage stress of common mode and differential mode. These will increase the complexity and reduce the reliability of the converter. In the proposed topology, there are no active switches but only diodes on the high-voltage side. By only controlling the low-voltage side active switches, the topology is able to switch between boost and buck mode. The output voltage of the proposed topology is easy to reach medium-voltage level. MVDC modules are able to be developed. That is important for the MVDC converter to reduce the module number and increase the power density. Meanwhile, only controller and switch drives are needed on the low-voltage side in the proposed topology. The control system is simplified. LC Filter in the Converter A LC filter is designed in the converter. It has two functions: (1) Filter out high-frequency components of output current to obtain a lower output current ripple. (2) Limit the output surge current during the grid voltage drops or dc fault occurs. During the process of LVRT, a surge current will occur due to discharge of the output capacitor. An output inductor is introduced in the converter to limit the surge current. The output inductor and output capacitor form a LC filter, as shown in Figure 6. The relationship between current and voltage of capacitor is described as: where and are the current and voltage of output capacitor , respectively. is the rectifier output current. is the output current of the converter. According to Figure 4, Equation (10) can be expressed as: where is the average voltage of the output capacitor, . is the switching period. where ∆ is the increment voltage of the output capacitor during (1 − ) . Equation (12) is sorted out to be: The maximum value of ∆ is obtained from (13): The capacitor voltage ripple coefficient is defined as = ∆ / . The minimum value of output capacitor is obtained as: As is increased, the output capacitance is reduced. That is benefit for LVRT. However, large voltage ripple on the capacitor will lead to high-voltage stress on the semiconductors. As the switching frequency increased, such as applying a SiC-based semiconductor, the capacitance is dramatically reduced. Design of the Output Inductor The transfer function of LC filter ( ) is described as: is the resonance angular frequency. is the equivalent output capacitance of the converter. It is equal to . is the number of modules in the converter. The main harmonic frequency of is defined as , then The equation is sorted out to be: is defined as the proportion of harmonics current with frequency to the rated output current. In the dc-dc converter, is equal to 4 . Then the minimum vale of output inductor can be described as: During the process of LVRT, the discharge of output capacitor will lead to a surge current. The output inductor should be designed to suppress the surge current. = (21) where is the voltage sag depth of . is the voltage of MVDC grid. Equation (21) is sorted out as: is the voltage falling time. ∆ is the increment current during LVRT. is the allowable overcurrent ratio. The inductance is obtained as: The value of the output inductor should be designed as the maximum one between and . Control Strategy of the Converter The LC filter can effectively decrease the surge current during LVRT, but it will lead to stability problems in the system. An appropriate control strategy should be proposed to solve the problem. The Effect of LC Filter The LC filter is shown in Figure 7a. 
In the grid-connected system, the transfer function from ( ) to ( ) for the LC filter is: where is the resonance angular frequency. The LC filter has a resonant spike at the resonant frequency, and a phase −180° jump occurs at the same time, which causes the system to be unstable, as shown in Figure 8. In order to solve the problem, damping should be introduced to eliminate the pole. Adding a series resistor R to the inductor L is shown in Figure 7b. The transfer function from ( ) to ( ) for the LCR filter is: ξ is the damping coefficient. Compared with the transfer function of the LC filter, a damping term is introduced in the transfer function of the LCR filter. The pole is eliminated by a proper ξ. The low-frequency and high-frequency amplitude-frequency characteristics of the transfer function have not been changed, as shown in Figure 8. ξ can be increased by increase R or C, or decrease L in (28). In the converter, and have been determined and is much larger than . That will decrease the value of ξ. By increasing the resistance, the damping coefficient can be effectively improved, which will increase the converter loss in practice. At the same time, adding a damping resistor on the high-voltage side will also increase the cost of the converter. Current Control Strategy with Active Damping Passive damping cannot be used in the converter because of large loss. Adding active damping is an effective method to increase damping without additional loss in the control system. Model of Current Closed Loop with LC Filter The current closed loop of the converter with the LC filter is shown in Figure 9. is the output current reference. It is generated by the external voltage loop. ( ) is the output current sample signal, the err signal between and ( ) is sent to the output current PI regulator ( ). The duty ratio ( ) is the result of the PI regulator. The difference of ( ) and ( ) is capacitor current ( ). ( ) is the grid voltage. The difference of ( ) and ( ) is output inductor voltage ( ). According to Figure 10, the small-signal transfer function from the output of the current compensation loop to the voltage of the current sampling resistor can be obtained as: Equation (30) is sorted out to be: where ( ) is the transfer function of output current to duty cycle. Figure 10. Small-signal model in boost mode. Figure 11 is the small-signal model in buck mode, the transfer function of ( ) is given as: Formulae (31) and (33) are the transfer function of boost and buck, respectively. We can find that the two controlled objects are integral links or approximate integral links. The current loop gain expression in Figure 9 is obtained in boost mode as: is the resonance frequency. It can be seen from Formula (34) that there is a resonance point in the transfer function and a −180° phase jump occurs at the same time, which causes the converter to be unstable, as shown in Figure 12. Figure 12. The bode waveform of current loop gain. Adding Active Damping in the Converter As described in 3.1, inductor series resistance is an effective way to increase damping of the control system. In order to add damping without additional loss, an active damping method based on state variable feedback is derived from the passive damping method. Figure 13a is the current control loop with a series resistance R to the inductor . Comparing with Figure 13a, the resistance voltage feedback point is moved to the output of the current regulator ( ). 
At the same time, the current sampling point is moved forward to the position of capacitor current ( ). The capacitor current is the feedback variable, and the feedback signal is the product of the feedback variable and the feedback function. By subtracting this feedback signal from the duty ratio, the current control loop in Figure 13b could achieve the same effect as series resistance. The feedback function .can be simplified by substituting the transfer function of ( ): The feedback function of the capacitor current is a proportion link, as shown in Equation (36). There is no integral and differential link in the equation, which greatly simplifies the control strategy. Because the control objects are integral link in both modes, the feedback function of the capacitor current is a proportion link for buck mode too. The control equivalent circuit diagram is shown in Figure 14. Then, the active damping control by capacitor current feedback is achieved. System damping can be controlled by adjusting R. where is the damping coefficient and is the resonance angular frequency of transfer Function (38). In order to achieve proper damping without reducing the response speed, the value of the damping coefficient ξ is generally designed as 0.707. The appropriate active damping value can be obtained by Formula (41): = 2 (41) Figure 15 is the bode curve of current control loop before compensation. Ta1 is the loop gain in Equation (34). Ta2 is the loop gain in Equation (38). The resonance point is eliminated in Ta2. Figure 16 shows the loop gain with PI compensation. PI compensators = 0.01, = 0.002 are implemented in the current loop. The gain margin is 12.8 dB, phase margin is 56.4°. Complete Control Strategy The complete control scheme of the converter is given in Figure 17. The control system includes the input voltage loop and the output current loop. In normal, the maximum power point tracking (MPPT) controller outputs the reference of the input voltage . The result of the voltage PI regulator (s) is the reference of the output current . The feedback signal participates in the current closed-loop control. In the process of LVRT, as the depth of the grid voltage drop increases, the output current rises. When the output current reaches the maximum current, the converter will switch from MPPT mode to constant current mode. There will be a threshold time for the LVRT. The converter will confirm the grid fault and shut down if the voltage recovery time exceeds the threshold time. When the grid voltage recovers, the converter will return to the MPPT mode. Simulation and Experimental Results A 5 kV/50 kW PV dc-dc module based on SiC MOSFETs and SiC Diodes was developed with proposed dc-dc topology, as shown in Figure 18. Four modules are input parallel and output series connected to form ±10 kV/200 kW PV MVDC converter, as shown in Figure 19. The main parameters are shown in Table 3. The efficiency curves of the dc-dc module are shown in Figure 21. Efficiency curves under different input and output voltages are shown. In boost mode, as the input voltage increases, the efficiency increases. Because the current is larger in buck mode than boost mode, the efficiency is lower in buck mode. The maximum efficiency is 98.9%. Figure 22 is the voltage gain range of the dc-dc module and converter. The output voltage curve in Figure 22a conforms to the derivation result in Section 2.3. The voltage gain of the dc-dc module is between 6 and 11. 
It is between 24 and 44 for the converter in normal operation, to adapt to the changing PV voltage. During LVRT, the converter operates in buck mode and the voltage gain is less than 8 or even close to 0. The blue shading indicates the voltage gain range of the converter, as shown in Figure 22b. A wide voltage gain range is realized in the converter. Table 4 shows the comparison between the DAB, the isolated boost, and the proposed topology. With the same turn ratio and input voltage, the DAB has the lowest output voltage. The isolated boost and the proposed topology have the same boosting ability; however, the output voltage of the isolated boost cannot be lower than 2.3 kV in Table 4. The proposed topology has a wider output voltage range. Figures 23 and 24 show the simulation results of inserting active damping by the proposed method. With only the PI regulator, the amplitude of the oscillation is more than 20% of the average output current. With active damping inserted, the current ripple is reduced to 0.8%; the waveform is stable and the oscillation is effectively eliminated. Figure 26 shows the experimental result of inserting active damping by the proposed method. It can be seen from the experimental results that the proposed method effectively suppresses the oscillation. Table 5 compares the PI-only control and the proposed method: the current ripple is reduced from 30% to 6%. In the LVRT test, the grid voltage drops from 5 kV to 1 kV and the controller adjusts the duty ratio to keep the output current at 10 A. The input voltage increases from 768 V to 795 V because the output power is reduced, as shown in Figure 27a. In the zero-voltage ride-through test, shown in Figure 27b, the grid voltage drops to 0 V and the output current is kept at 10 A by adjusting the duty ratio. The performance of the converter under the LVRT condition is presented in Figure 28. In this experiment, the output of the converter is connected to a 7 kV dc grid and the input is connected to a programmable dc power supply with a PV curve. As the grid voltage drops from 7 kV to 2 kV within 150 ms, the current closed loop regulates rapidly to hold 2 A during the LVRT process. The output power is reduced from 14 kW to 4 kW and the input voltage increases from 350 V to 450 V. As the grid voltage drops from 7 kV to 0 V within 200 ms in Figure 29, the converter regulates the duty cycle to keep the current at 4 A throughout the process. The input voltage increases from 500 V to 700 V and the input current is reduced from 50 A to 2 A. When the grid voltage recovers, the converter shifts from buck mode to boost mode, with almost no current spike during this transition. Conclusions This paper presented a PV MVDC grid-connected converter with LVRT capability. A novel isolated buck-boost topology and its modulation method were proposed for the modules of the converter. The control system is simplified by controlling only the active semiconductors on the low-voltage side, and the output voltage is dramatically increased by applying only diodes on the high-voltage side. A medium-voltage dc-dc module was developed based on this topology to increase the power density of the converter. An LVRT method based on the LC filter for the MVDC converter was proposed. Compared with schemes that require additional circuits to suppress overcurrent during LVRT, this scheme is a cost-effective solution. 
In order to solve the instability problem caused by the resonance spike of the LC filter, an active damping control algorithm based on output capacitor current feedback was proposed. The active damping method stabilizes the control system without additional loss. The complete control strategy is presented, and the converter can operate in both normal and LVRT modes. A 5 kV/50 kW SiC-based dc-dc module and a ±10 kV/200 kW PV MVDC converter have been developed. Experiments on the converter for the MVDC system under normal and LVRT conditions are presented. High efficiency and satisfactory performance of the converter are achieved.
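As a numerical companion to the LC filter sizing and active damping design described in Sections 2 and 3, the short Python sketch below walks through the chain of relations referred to in the text: a ripple-based minimum output capacitance, an LC corner frequency placed below the dominant rectifier harmonic at 4 f_s, the resulting resonance frequency, and the damping resistance of the form 2ξ√(L/C) that the capacitor-current feedback emulates instead of a physical resistor. Because several of the paper's equations and all component values are not reproduced here, every number, the decade margin below the harmonic, and the exact form of the capacitance formula are illustrative assumptions rather than the authors' design values; the surge-current criterion discussed in Section 2 would additionally set a lower bound on the inductance, and the larger of the two values would be used.

import math

# Illustrative component values only -- not the design values from the paper.
U_out  = 5000.0      # module output voltage [V]
I_out  = 10.0        # module output current [A]
D      = 0.6         # duty cycle of the upper-arm switches (boost region)
f_s    = 20000.0     # switching frequency [Hz]
lam_u  = 0.05        # allowed capacitor voltage ripple coefficient (delta_U / U_out)
n_mod  = 4           # number of IPOS-connected modules

T_s = 1.0 / f_s

# Minimum output capacitance from the ripple criterion (assumed form: the capacitor
# carries roughly the load current during the (1 - D)*T_s interval of each cycle).
C_min = I_out * (1.0 - D) * T_s / (lam_u * U_out)

# Dominant harmonic of the rectifier current is 4*f_s (as stated in the text);
# place the LC corner a decade below it (the decade margin is an assumption).
f_h  = 4.0 * f_s
f_c  = f_h / 10.0
C_eq = C_min / n_mod                       # series-connected module outputs
L_f  = 1.0 / ((2.0 * math.pi * f_c) ** 2 * C_eq)

# Resonance frequency of the resulting LC filter.
f_res = 1.0 / (2.0 * math.pi * math.sqrt(L_f * C_eq))

# Damping resistance for a damping ratio of 0.707; with capacitor-current feedback
# this resistance is emulated in the control loop rather than placed in the circuit.
xi     = 0.707
R_damp = 2.0 * xi * math.sqrt(L_f / C_eq)

print(f"C_min  = {C_min * 1e6:.2f} uF per module")
print(f"L_f    = {L_f * 1e3:.2f} mH")
print(f"f_res  = {f_res:.0f} Hz")
print(f"R_damp = {R_damp:.0f} Ohm (emulated via capacitor-current feedback)")

With the assumed figures the corner lands at about 8 kHz and the emulated damping resistance is on the order of a hundred ohms; the point of the sketch is only to show how the ripple target, the harmonic placement, and the damping ratio of 0.707 jointly fix C, L, and the feedback gain.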
Evaluating the cross-cultural validity of the Dutch version of the Social Exclusion Index for Health Surveys (SEI-HS): A mixed methods study Background The recently developed Social Exclusion Index for Health Surveys (SEI-HS) revealed particularly strong social exclusion in non-Western immigrant groups compared to the native Dutch population. To qualify such results, cross-cultural validation of the SEI-HS in non-Western immigrant groups is called for. Methods A sequential explanatory mixed methods design was used, employing quantitative data from the Netherlands Public Health Monitor along with qualitative interviews. Data from 1,803 adults aged 19 years or older of Surinamese, 1,009 of Moroccan and 1,164 of Turkish background and 19,318 native Dutch living in the four largest cities in the Netherlands were used to test the factorial structure of the SEI-HS and differential item functioning across immigrant groups. Additionally, 52 respondents with a high score on the SEI-HS and from different background were interviewed on the item content of the SEI-HS and subjective feelings of exclusion. For each SEI-HS item the semantic, conceptual and contextual connotations were coded and compared between the immigrant groups and native Dutch. Results High levels of social exclusion were found in 20.0% of the urban population of Surinamese origin, 20.9% of Moroccan, 28.7% of Turkish and 4.2% of native Dutch origin. The 4-factor structure of the SEI-HS was confirmed in all three immigrant groups. None of the items demonstrated substantial differential item functioning in relation to immigration background. The interviews uncovered some methodological shortcomings, but these did not substantially impact the observed excess of social exclusion in immigrant groups. Conclusions The present study provides evidence in support of the validity of the SEI-HS in adults of Surinamese, Moroccan and Turkish background and confirms the major social exclusion of these immigrant groups in the main cities in the Netherlands. Policy measures to enhance social inclusion and reduce exclusion are urgently needed. Introduction Particularly high levels of SE were observed in adults of non-western background measured with the SEI-HS in 2012. One in five adults (21.0%) of non-western background was classified as moderate to strong SE, while the prevalence rates in adults of native Dutch and western migration background were 2.7% and 6.5% respectively [28]. Differences in SE might be expected given that risk factors for SE, such as low educational level, low income, low labour market position, linguistic problems and poor health [20], tend to occur more frequently in non-western immigrant groups than in native Dutch and western immigrant groups [29,30]. The magnitude of the differences was so large, however, that suspicion has been raised on a potential cultural bias of the SEI-HS. The leading question for the present study was whether the strong SE among adults of Surinamese, Moroccan and Turkish background compared with native Dutch citizens in the four largest cities in the Netherlands, can be explained by shortcomings in the cross-cultural validity of the SEI-HS. To answer the research question, a mixed methods approach was chosen. In addition to quantitative testing of the cross-cultural validity through confirmatory factor analysis and differential item functioning (DIF) analysis [31], qualitative interviews were conducted with socially excluded respondents of immigrant background and native Dutch origin. 
Qualitative data contribute insight into the individual experience of socially excluded people and can be used to explore whether items sufficiently represent the same content across cultures [32]. Mixed methods design The present study has a sequential explanatory mixed methods design consisting of a dominant quantitative and a less dominant qualitative phase [33,34]. Fig 1 shows the sequence, priority and integration of the two phases. In phase I survey data were collected on SE in the general population. In phase II, data from phase I were used to select a sample of socially excluded persons of Surinamese, Moroccan, Turkish and native Dutch origin. Semi-structured interviews were conducted on the perspective of the respondents on their situation and responses on the SEI-HS. The Medical Ethics Review Committee of the AMC confirmed that under Dutch law, medical ethics approval was not required for phase I (AMC, W12_146 no. 12.17.0163) nor for phase II (AMC, W13_311 # 14.17.0007) as participants were not subjected to any intervention or treatment. I Quantitative phase Data collection. The quantitative data were collected by the Public Health Services of the four largest cities in the Netherlands, as part of the Public Health Monitor (PHM) 2012. The PHM is a nationwide self-report survey of non-institutionalised adults aged 19 years or older, conducted every four years. To ensure that elderly and people living in neighbourhoods with a low socioeconomic status were well represented, stratified samples were drawn by Statistics Netherlands, based on age and neighbourhood. In total 71,627 residents of Amsterdam, Rotterdam, The Hague and Utrecht were invited to participate. Non-responders received two written reminders, and in the case of non-western immigrants, an extra telephone call or home visit. Questionnaires in Turkish, Moroccan Arabic and English translation could be used and trained interviewers were available to assist respondents face-to-face or by phone in their preferred language (Dutch, Arabic, Berber, Turkish or English). The response rate was 40% (Surinamese 28%, Moroccan 26%, Turkish 26%, Dutch 48%). Statistics Netherlands enriched the PHM data with information on zip code, migration background and standardised household income [35]. Participation in the research was anonymous and voluntary. In accordance with the Dutch Law, participants were informed by letter that by completing the questionnaire they consent with anonymous use of data for research. Measurement. Social exclusion. The SEI-HS consists of 17 items which measure four dimensions and an overall index of SE [24]. The four dimensions are: 1) lack of social participation, 2) material deprivation, 3) inadequate access to basic social rights and 4) lack of normative integration. Scores on the index and the four dimensions are categorised into 'little or no', 'some' and 'moderate to strong' exclusion. The SEI-HS was validated in the general Dutch population. The items were derived from various validated questionnaires such as the Loneliness scale of De Jong Gierveld [36], the SCP Social Exclusion Index [20] and Social Cohesion and Trust [22]. The internal consistency, internal structure, construct validity and generalisability were found satisfactory [24]. Migration background. In line with the Dutch standard definition, country of birth or, in case of second-generation immigrants, country of birth of the mother and/or father, as registered in the municipal population registers, were used to define migration background. 
Quantitative data analysis. Descriptive statistics. Analyses were restricted to respondents of Surinamese, Moroccan and Turkish origin with native Dutch respondents as the reference group. In order to control for the stratified sampling design and selective non-response, we used SPSS Version 22 Complex Samples Likelihood tests for the descriptive analyses of the prevalence of SE. Sampling weights were calculated by Statistics Netherlands based on a linear model with 9 sociodemographic variables and their interaction terms [37]. The significance level α was set at 0.001 to reflect the large sample size. Structural validity. To test whether the SEI-HS factor structure holds across the three migrant groups, we conducted confirmatory factor analyses in data subsets per migrant group using SPSS Amos 22.0. Five of the standard goodness-of-fit statistics given in Amos were used to assess model fit i.e. root mean square error of approximation (RMSEA), upper bound of 90% confidence interval (HI90), Tucker-Lewis index (TLI), comparative fit index (CFI) and Hoelter's .05 Index [38]. The Chi square statistic was not considered given its sensitivity to large sample sizes. The model fit was considered good if RMSEA< 0.05, HI90) < 0.06, TLI � 0.95, CFI > 0.90 and Hoelter's .05 Index � 200 [38]. These same criteria were used in the development of the SEI-HS [24]. Differential item functioning (DIF). DIF occurs when one group of individuals responds differently from another group on a given questionnaire item, even though both groups are equivalent on the underlying construct that is assessed, or in DIF terminology, if both groups show the same ability on the matching variable. In this study the categories 'little or no', 'some' and 'moderate to strong' of the relevant dimension scale were used as the ability levels. The cut-off points for these categories were based on the 85 th and 95 th percentile in the Dutch adult population of 19 years or older in 2012 [24]. For each immigrant group, three hierarchical models were calculated with SPSS ordinal logistic regression, with Y being the SEI-HS item tested, M the matching variable (i.e. the corresponding SE dimension) and G the grouping variable (i.e. Surinamese, Moroccan or Turkish versus Dutch): An item was considered to exhibit substantial DIF if the difference between model 1 and 3 in log-likelihoods was statistically significant (α = 0.001) and the change in R 2 at least moderate according to the Jodoin-Gierl effect size criteria by which ΔR2 < 0.035 is classified as negligible; 0.035 � ΔR2 � 0.070 as moderate and ΔR2>0.070 as large. [39][40][41]. In case of substantial DIF further analyses were made to characterize the type of DIF into uniform DIF (significant difference between model 1 and 2) and/or non-uniform DIF (significant difference between model 2 and 3). Criteria can be found in S1A-S1C Table. II Qualitative phase In the qualitative part of the study we set out to describe, analyse and compare the experiences of social exclusion and the responses on the SEI-HS in the four research groups. We followed the consolidated criteria for reporting qualitative research (COREQ) checklist [42]. Participant selection. The sampling frame consisted of the respondents of Surinamese, Moroccan, Turkish and Dutch background, with a high score on the SEI-HS who had given the Public Health Services written consent to re-contact (Table 1). Respondents from the city of Rotterdam could not be included as permission had not been requested. 
To reflect the variability in gender, age and neighbourhood across the four research groups, in total 50 cases were selected at random from the different strata. In case of non-response a replacement was selected as similar as possible to the original case. Data collection. Interviews took place between March and September 2014. During this period 177 respondents were contacted by letter, telephone and home visits. Up to three attempts were made to get in touch. The response rates are shown in Table 1, with no contact being the main reason for non-response (not at home or moved house). Interviews took place at a time and location convenient to the respondent, generally at their home address. Signed informed consent was obtained at the time of the interview. Each respondent received a 20 euro gift card as compensation for their time. The interviews were conducted by two experienced members of the research team (CB, AvL), of Dutch and Indonesian background respectively, and students of Surinamese, Moroccan and Turkish background. Students were trained by members of the research team and closely supervised in their work. The supervision not only focused on methodological aspects but also on emotional wellbeing and safety of the students. To explore the perceptions of the respondents, a semi structured topic guide was used which comprised open-ended questions accompanied by probes and prompts to expand, clarify and understand responses. The 17 items of the SEI-HS were asked exactly as worded, but further explanation was given if the respondent asked for it. Other topics included health and health behaviour, feelings of being left out of society, locus of control and expectations for the future. To create a pleasant and personal atmosphere, respondents were invited, at the start of the interview, to tell something about themselves and the things they enjoy doing. Interviews lasted 20-90 minutes (53 minutes on average), depending on the willingness and ability of the respondents. Interviews were audio-recorded and transcribed verbatim by independent transcriptionists. Qualitative data analyses. The transcribed interviews were entered in MaxQDA and analysed by two research team members (BC, AvB) using thematic coding techniques. The initial coding framework was based on the structure of the topic guide. Subsequently, for each SEI-HS item text references were analysed on semantic, conceptual and contextual evidence and categorised [32]. Semantic evidence included all text references referring to the meaning of the language used and the comprehensibility of the item. The text references were coded '0' if respondents correctly understood the wording of the item, '1' if that was not the case and 'x' if there was no conclusive evidence. Conceptual evidence included all text references referring to the general idea or notion captured by the item. The conceptual connotations were compared with the intended concept of the item and coded as either equivalent (0), deviating (1) or inconclusive (x). Contextual evidence included all text referring to the contextual specificity of items. This specificity only becomes apparent through between-group comparison [32].The text references were coded per respondent as: '0' if no culturally specific context was mentioned or appeared to play a role in the respondents answer, '1' if culturally specific context was mentioned and 'x' if there was no conclusive evidence. Scores were calculated for each research group and each type of evidence. 
If 30% or more of the responses was problematic i.e. coded '1', we categorised this as 'yes, there may be a reason for concern'; if 10-30% was problematic, we categorised this as 'perhaps, there is a reason for concern; and 0-10% was categorised as 'no reason for concern'. Cases with inconclusive evidence were excluded from the calculation. Finally, all responses coded 'yes, there may be a reason for concern' were compared between the groups and analysed for their potential effect on the cross-cultural validity. Reporting in this manuscript follows the STROBE guidelines for cross-sectional studies [43]. I Quantitative phase Descriptive statistics. Background characteristics. Table 2 shows that the Dutch respondents of phase 1 are generally older than the three immigrant groups and live less often in neighbourhoods with a low socioeconomic status (SES). Social exclusion. The data presented in Table 3 confirm that in the four cities SE is more prevalent in adults of Surinamese, Moroccan and Turkish origin compared to native Dutch adults. High levels of SE were found in 20.0% of the urban population of Surinamese origin, 20.9% of the Moroccan, 28.7% of Turkish and 4.2% of native Dutch origin. Elevated levels were also found on the underlying dimension scales. Especially material deprivation was increased in all three immigrant groups by a factor of 6 to 7. Inadequate access to basic social rights was highest in adults of Moroccan origin. Only in Turkish adults, the prevalence of 'Lack of normative integration' was not increased compared to adults of native Dutch origin (p = 0.023). Confirmatory factor analyses. The results showed an acceptable model fit for the three immigrant groups (Table 4). In all cases the Hoelter's .05 Index indicated good model fit. Factor loadings were all significant at the 0.001 level except for item 17 'Work is just a way of earning money' ( Table 4). The factor loadings of this item were not significant in the Moroccan and Turkish groups. The RMSEA, CFI and TLI coefficients were comparable to the fit of the original SEI-HS model. Differential item functioning. Of the 17 items examined, none displayed substantial DIF i.e. p < 0.001 and ΔR 2 0.035 or higher (S1A-S1C Table). II Qualitative phase In total 52 interviews were conducted, with respectively 11 Surinamese, 9 Moroccan, 10 Turkish and 22 Dutch persons. Four in five were interviewed by an interviewer of the same migration background (81%). Characteristics of respondents are presented in Table 2. For each SEI-HS item the semantic, conceptual and contextual connotations reported by the respondents were coded and compared between the four research groups. As can be seen from Table 5 the items of dimension 4 caused most reason for concern. Semantic problems were identified for all groups (including native Dutch respondents) in item 17. The item was misunderstood by more than a third of the respondents (12 out of 33). Instead of 'working is just a way of earning money' most of them understood the item as 'working is an unjust way of earning money'. Coincidentally, a negative answer indicates in both cases normative integration and a positive answer the lack thereof. Semantic problems with item 15 (I sometimes do something for my neighbours) concerned primarily Moroccan respondents. Items 14, 15 and 17 of dimension 4 showed conceptual problems in all four groups. 
In almost half of the respondents (15 out of 32), item 14 measured lack of money rather than noncompliance with the core values of Dutch society: "I have a few charities that are my favourites, they really need it. But my finances are at a pretty low ebb at the moment." In one third of the respondents (18 out of 37), item 15 measured lack of opportunity to do something for your neighbours (e.g. in case of conflict or no contact with neighbours) and/or inability to help (e.g. due to old age or ill health). In one fifth of the respondents (7 out of 35), item 17 measured work ethic rather than noncompliance with core values. These respondents found work a good way to earn money: "If you don't work, you won't eat". Contextuality played a role in item 14. One Moroccan and one Turkish respondent mentioned payment to the mosque. This works both ways: "If they come from the mosque, I pretend I don't hear anything, they think 2 or 3 euros is too little." One Moroccan respondent paid medical costs for poor family members in the home country. The items of dimensions 2 and 3, 'Material deprivation' and 'Access to basic social rights', gave less reason for concern. A number of respondents had difficulty understanding the wording of items 8 and 12. Three Surinamese respondents (3 out of 7) answered item 8 not in terms of whether they have enough money to heat the house properly, but in terms of whether the house can be heated well: "I hope so, I have not experienced the winter here yet". Five Moroccan respondents (5 out of 9) were not able to translate their (dis)satisfaction with their home (item 12) into a corresponding grade. Our analysis did not suggest any conceptual problems: all respondents interpreted the items of dimensions 2 and 3 as intended. Contextuality only played a role in item 10. Having enough money to visit others depended not only on the financial situation of the household but also on the travel costs incurred. Family of immigrants generally live further away, making travel costs more difficult to pay. The items of dimension 1 also functioned much as expected, with some exceptions. Item 1 was not understood by a quarter of the respondents (6 out of 24), both immigrants and one native Dutch respondent: "Emptiness? What do you mean by that?". Item 5 showed comparatively the most validity problems. Six respondents, both immigrants (3 out of 17) and native Dutch (3 out of 18), reported that they felt rejected by their employer or by institutions like the tax office or the Employee Insurance Agency. Conceptually this interpretation belongs more to dimension 3 'Access to institutions' than to 'Social Participation'. 
In four cases the events or cases referred to were specific to the cultural group, for example forced marriage in case of a Turkish respondent. Contextuality also plays a role in item 6. The degree of contact that Moroccan, Turkish and Dutch respondents have with their neighbours is influenced by the migration background of these neighbours. According to a Turkish respondent they just say "hi" to the Dutch neighbours, but visit their Turkish neighbours regularly at home. The concept that is being measured, however, does not differ between the groups. Discussion Our objective was to examine possible shortcomings in the cross-cultural validity of the SEI-HS that might explain the high prevalence of SE in adult immigrant groups found in the 2012 health monitor. The study was conducted among adults of Surinamese, Moroccan, Turkish and Dutch origin in the four largest cities in the Netherlands. The quantitative part of the study showed no cross-cultural validity issues. CFA confirmed the 4-factor structure of the SEI-HS in the three immigrant groups and none of the SEI-HS items exhibited problems with differential item functioning. Item scores did not differ significantly between respondents of Surinamese, Moroccan, Turkish origin and native Dutch respondents at the same level of SE. The qualitative part uncovered little differences in understanding and interpretation of items between the population groups, but some general methodological shortcomings were identified, especially in the normative integration dimension of the SEI-HS. The socially excluded respondents we interviewed did not always interpret the items as intended, due to unfamiliarity with words, complicated sentence structures and different connotations. Potential cultural biases were limited to the semantics of items 8,12 and 15 and contextuality of items 5 and 10. The interviews showed that particularly Moroccan respondents had problems understanding certain items. Rewording or rephrasing of semantically difficult items could be considered. In general, these findings underline the importance of offering assistance to respondents face-to-face or by phone in their own language (Berber or Arabic). Items 5 (I often feel rejected) and 10 (I have enough money to visit others) showed contextual differences that might threaten the cultural validity of the items. This was however not reflected in the quantitative analyses. Most validity issues were as noteworthy in native Dutch respondents as in Surinamese, Moroccan and Turkish respondents. This was not expected since all SEI-HS items originate from widely used and/or validated questionnaires [20][21][22][23]. The content of items 8-10 and 13-17 was derived from literature and interviews, judged by four focus groups and tested through individual cognitive interviews [20]. Efforts were made to include people with a higher risk of SE i.e. with low income and low educational level. The content of items 1-5 was derived from literature, life histories and interviews and judged by researchers and students [44]. Item 11 stems from a validated scale [45] that was translated into Dutch with back translation into English [22]. As far as we could establish, these items were not pre-tested among persons from disadvantaged social groups and/or low education or income. Despite the fact that the Normative Integration items were pretested with low-income and low-education participants, several issues with semantic and conceptual validity were encountered. 
The concept of normative integration touches on the moral underclass discourse, one of three models of social exclusion identified by Levitas [46]. The discourse focuses on the behavioural and attitudinal characteristics of the excluded and their imputed deficiencies. The Normative Integration scale developed by the SCP [20] reflects a fairly narrow spectrum of behaviours and attitudes that are relatively common in the general Dutch population. Our study showed that high scores on lack of normative integration do not necessarily reflect a lack of social commitment or anomie, but may reflect an inability to comply. For example, not helping your neighbours because you are handicapped yourself or not donating to good causes because you are in serious debt. One could argue that concept and social group are coming together here and that the failure to comply with given norms and values is part and parcel of the exclusion itself. From this point of view, the validity of the Normative Integration scale need not be jeopardised. High scores on the Normative Integration scale reflect high social exclusion, even though the interpretation of the concept and context may differ between respondents. Further research in the non-excluded group could shed more light on this issue. A strong point of our study is the use of a sequential explanatory mixed methods design for validation purposes. This approach is not very common. Usually, qualitative research precedes quantitative validation and not vice versa [47]. Although uncommon, the approach has been used before. For example, Morren et al. [48] interviewed respondents with deviant response style behaviour and Carlier et al. [49] approached groups with high levels of non-response. In our case, the design allowed us to address reliability and validity issues that were uncovered in the quantitative survey. It also allowed to confirm the ability of the social exclusion index to identify a diverse group of socially excluded persons including perpetrators of domestic violence, persons leading very isolated lives, victims of violent incidents such as armed robbery or rape, people with drug addiction or aggression disorder, and someone just released from detention. There are some limitations to our study. The first limitation is related to the low response rate of the PHM especially among non-western immigrant groups. Although the Public Health Services employed a large range of measures to increase participation of difficult to reach groups, a certain degree of selection bias e.g. for better integrated and educated immigrants, is inevitable. The great diversity within the qualitative research group gave us, however, confidence in the representativeness of the research outcomes. Another limitation is that the research was conducted only in urban areas. Lastly, we classified the persons in our research based on their country of birth and that of their parents. This classification does not necessarily define their individual identity or represent meaningful social categories [50]. Gender, age, occupation, ethnic identity and educational level, may be more relevant in certain contexts than migration background. As more detailed knowledge becomes available, it becomes more difficult to make statements about immigrant groups in general [51]. Conclusions The results of this study support the cross-cultural validity of the SEI-HS in three major nonwestern immigrant groups in the Netherlands. 
The findings suggest that the large differences in SE found between native Dutch and non-western immigrant groups are real and not due to measurement bias. This raises serious concerns about the social inclusion of non-western immigrants in the four largest cities in the Netherlands and its potential effect on health and wellbeing. Policy measures to reduce SE are urgently needed as well as more research into the mechanisms and risk factors of SE among immigrant groups and pathways to more social inclusion. Further research is necessary to examine the content validity of the normative integration dimension of the SEI-HS and rephrasing semantically problematic items. The interviews showed that the lived experience of socially excluded people may differ from the majority population. In general, it is advisable to involve people in adverse social circumstances in the development of health related measures. Supporting information S1 Table. (A-C) Differential item functioning in SEI-HS items with respect to migrant background, A: Surinamese, B: Moroccan and C Turkish versus native Dutch. (PDF) S2 Table. Factor loadings items SEI-HS in adults of Surinamese, Moroccan and Turkish origin compared to the reference values in the general Dutch population. (PDF) S1 Appendix. Dutch version of the SEI-HS. (PDF)
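For readers who wish to re-run the DIF screen described in the Methods with open-source tools, the sketch below illustrates the logic of the three nested ordinal logistic models, the likelihood-ratio test between models 1 and 3, and the Nagelkerke ΔR² judged against the Jodoin-Gierl thresholds. This is an illustrative re-implementation under assumptions: the authors used SPSS ordinal logistic regression, whereas the sketch uses Python with statsmodels, the column names are hypothetical, and the null log-likelihood is computed directly from the marginal category counts.

# Illustrative re-implementation of the DIF screen (not the authors' SPSS syntax).
# Assumed data layout: one row per respondent, with the item response, the matching
# dimension level coded 0/1/2, and a 0/1 group indicator (immigrant vs. native Dutch).
import numpy as np
import pandas as pd
from scipy.stats import chi2
from statsmodels.miscmodels.ordinal_model import OrderedModel

def null_loglike(y):
    # A thresholds-only ordinal model reproduces the marginal category proportions,
    # so its maximised log-likelihood follows directly from the observed counts.
    counts = pd.Series(y).value_counts()
    n = counts.sum()
    return float((counts * np.log(counts / n)).sum())

def fit_loglike(y, X):
    # Proportional-odds (ordinal logit) model; returns the maximised log-likelihood.
    return OrderedModel(y, X, distr="logit").fit(method="bfgs", disp=False).llf

def nagelkerke(ll_model, ll_null, n):
    cox_snell = 1.0 - np.exp(2.0 * (ll_null - ll_model) / n)
    return cox_snell / (1.0 - np.exp(2.0 * ll_null / n))

def dif_item(data, item="sei_item", matching="dimension_level", group="migrant"):
    y = data[item].to_numpy()
    n = len(data)
    M = data[matching].to_numpy(dtype=float)
    G = data[group].to_numpy(dtype=float)
    X1 = M[:, None]                         # model 1: matching variable only
    X2 = np.column_stack([M, G])            # model 2: + group (uniform DIF)
    X3 = np.column_stack([M, G, M * G])     # model 3: + interaction (non-uniform DIF)

    ll0 = null_loglike(y)
    ll1, ll2, ll3 = (fit_loglike(y, X) for X in (X1, X2, X3))

    p_overall = chi2.sf(2.0 * (ll3 - ll1), df=2)     # model 3 vs model 1
    p_uniform = chi2.sf(2.0 * (ll2 - ll1), df=1)     # model 2 vs model 1
    p_nonunif = chi2.sf(2.0 * (ll3 - ll2), df=1)     # model 3 vs model 2
    delta_r2 = nagelkerke(ll3, ll0, n) - nagelkerke(ll1, ll0, n)

    # Jodoin-Gierl effect-size criteria used in the paper
    size = "negligible" if delta_r2 < 0.035 else ("moderate" if delta_r2 <= 0.070 else "large")
    return {"p_overall": p_overall, "p_uniform": p_uniform, "p_nonuniform": p_nonunif,
            "delta_R2": delta_r2, "effect_size": size,
            "substantial_DIF": p_overall < 0.001 and delta_r2 >= 0.035}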
The effectiveness of ultrasound-guided core needle biopsy in detecting lymph node metastases in the axilla in patients with breast cancer: systematic review and meta-analysis Highlights • US-CNB is a reliable preoperative test for detecting axillary metastasis in breast cancer patients, with high accuracy and safety.• Compared to US-FNA, US-CNB shows even higher accuracy for detecting axillary metastasis in breast cancer patients.• The use of US-CNB as a preoperative test can reduce the need for a second operation and improve patient outcomes. Introduction Breast Cancer (BC) is now one of the most common malignancies in women worldwide. It has a high morbidity and mortality rate and is one of the leading causes of death in women, posing a major threat and challenge to women's health worldwide. 1 Preoperative assessment of Axillary Lymph Nodes (ALNs) in BC patients allows early and accurate staging of patients and plays an important role in the subsequent treatment and prognostic assessment and can be used to make relevant decisions about adjuvant therapy such as radiation treatment, Neoadjuvant Chemotherapy (NAC) treatment, and breast reconstruction. According to the results of the American College of Surgeons Oncology Group (ACOSOG) Z0011 trial, patients with breast-conserving early-stage BC with tumors less than 5 cm can safely avoid Axillary Lymph Node Dissection (ALND) even if SLN1-2 metastases are treated with subsequent breast radiotherapy and systemic therapy. 2 The release of the Z0011 trial, which changed the traditional changed the traditional treatment approach of the past and set off an intense debate and clinical practice. 3 Among them, ALND impacts patients' overall postoperative quality of life and can be associated with significant surgical complications (pain, upper extremity mobility impairment, edema, and so on). 4 Ultrasound (US) is often an important tool in the preoperative diagnosis of morphological features of ALNs and is key to understanding the progression of BC. With the increasing availability of ultrasound, the accuracy of preoperative diagnosis in BC patients is improving. Ultrasound-guided Fine Needle Aspiration (US-FNA) and ultrasoundguided core needle biopsy (US-CNB) use ultrasound imaging to select the optimal puncture biopsy route, effectively improving the accuracy of diagnosis. In recent years, the use of US-FNA for preoperative evaluation of ALNs has increased due to its low risk, ease of use, low cost, and low complication rate. 5 Recently, it has been suggested that US-CNB may be superior to US-FNA in terms of diagnostic accuracy, providing a more accurate preoperative assessment of the status of ALNs and potentially replacing US-FNA. 6 US-CNB is a technique in which ALNs tissue is removed from an abnormal area of the axilla, usually by an operator using a large core needle from an ultrasonically explored area. 7 Besides having good accuracy, the US-CNB offers more extra samples for tumor classification and immunohistochemistry. 8 But it is more expensive and more invasive, and there may be a possible danger of malignant seeding along the puncture path. The diagnostic value of the US-CNB has been questioned due to the lack of consistency in the various reported results. There have been previous studies on the accuracy of US-CNB for the axilla, but fewer studies have been included. Based on the above, the authors present an updated review and meta-analysis. 
Thus, the authors have mainly comprehensively assessed the diagnostic performance of US-CNB in assessing metastases of ALNs in BC to provide a basis for accurate preoperative assessment. Methods The review complies with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). 9 And the review complied with the code of the "Cochrane Handbook for Systematic Reviews of Diagnostic Test Accuracy". 10 The protocol was registered and accepted by PROSPERO under CRD42022369491. Literature search The authors searched the electronic databases PubMed, Scopus, Embase, and Web of Science for clinical trials about US-CNB for the detection of ALNs in breast cancer patients. The following search terms were used: ((sonography OR sonography guided) OR (Ultrasound guided OR US-guided) OR (Ultrasound OR US)) AND (core-needle biopsy OR core needle biopsy OR CNB) AND (ALN OR axillary lymph nodes OR axillary lymphadenopathy OR axillary staging) AND (pre-operative staging OR preoperative OR preoperative period OR preoperative). The start date of the search was not restricted. Our last search was updated on 1 September 2022. Among them, search results were combined and output to the EndNote bibliographic management tool, and repeated results were deleted. 11 Next, full-text screening was performed to include eligible ones in the meta-analysis. The authors conducted a manual search of the literature included in the study. Eligibility screening of the literature included was carried out independently by two trained researchers. Any disagreements were resolved by consensus or by consultation with a third senior researcher. In addition, the screening process of the study was summarized by the PRISMA flowchart. Inclusion and exclusion criteria Two researchers independently reviewed the titles, abstracts, and full text of the original articles. Two researchers screened eligible articles for inclusion and included those that met the following criteria: 1) BC Patients with no preoperative clinical signs of ALNs metastasis; 2) Included studies should have enough data; 3) Published case-controlled study of the accuracy of the US-CNB correlation; 4) Retrospective and prospective studies. Next, the authors further reviewed the articles and excluded those that met the exclusion criteria, and the excluded articles comply with the below criteria: 1) Articles that have been repeatedly published; 2) Literature with unavailable full papers or insufficient data; 3) Abstracts, conference articles, case reports, dissertations, and so on; 4) Animal laboratory experiments; 5) Non-English language articles and; 6) Pregnancy studies. Data extraction All study data in this review were collected independently by two researchers. The following data were collected: the first author, study type, number of patients, country, mean age of patients, needle diameter, number of tTrue Positives (TP), number of False Positives (FP), number of False Negatives (FN), and number of True Negatives (TN). In the process of original data collection, some studies performed US-CNB and US-FNA at the same time, and the authors collected both together for subsequent comparative analysis. Quality assessment The quality of the included literature was independently assessed using the Cochrane Handbook diagnostic study quality assessment tool QUADAS-2, which was used for this diagnostic accuracy meta-analysis to determine the level of risk bias and applicability for each of these parameters. 
12 It consists of four key areas (patient selection, index test, reference standard, and flow and time) to help assess the risk of bias. There are three levels of risk of bias based on the likelihood of bias occurring: high risk, unclear risk, and low risk. The quality assessment is carried out by two researchers after reading the full text, and if they cannot reach a consensus, a third senior researcher should be consulted to resolve the issue. Statistical analysis Statistical analysis of the data collected and adherence to the Cochrane Diagnostic Test Accuracy (DTA) review guidelines. 10 Metaanalysis of the included literature data was carried out using Meta-DiSc1.4 software and Review Manager 5.3 software, and heterogeneity was evaluated by I 2 statistics. During the statistical analysis of diagnostic studies, it is inevitable to encounter heterogeneity, which may be caused by threshold effects or non-threshold effects. Heterogeneity due to threshold effects was tested using the Spearman correlation coefficient. If p > 0.1, it indicates that the threshold effect did not cause heterogeneity; if p≤0.1, it indicates that the threshold effect caused heterogeneity. Heterogeneity was evaluated using the I 2 statistics, with scores from 25%−49% classified as low, 50%−74% as moderate, and more than 75% as high heterogeneity. 13 Meanwhile, the authors plotted the Summary Receiver Operating Characteristic(SROC) curve of US-CNB to illustrate the association for sensitivity and specificity, where the value of AUC is 0.5 and 0.7 means poor precision, > 0.7 and 0.9 means middle precision, > 0.9 and 1.0 means great precision. Meta-analysis results are mainly presented in the form of forest plots, and funnel plots are used to analyze for publication bias. Search results The authors tentatively identified 1530 studies in the literature search, and after excluding duplicates, a total of 1228 studies remained. 343 studies remained after the exclusion of reviews, reports, system reviews, and meta-analyses. 343 articles were assessed from the full text and 17 studies were finally included for meta-analysis. Details of the process used to select the included studies are shown in Fig. 1. Characteristics of included studies Of the 17 studies, 7 were retrospective and 10 were prospective. Of these, eight studies compared US-CNB with US-FNA. Regionally, there were five studies in Asia, seven studies in Europe, and five studies in the United States. The needle diameters used in US-CNB varied between 14-and 22-gauge, with eight studies using 14-gauge needles and others using 16-, 18-, 20-, 21-, and 22-gauge needles or not reporting. Needle diameters used in US-FNA varied between 21 and 25 gauges, with six studies using 21-gauge and 25-gauge needles. Of these, most patients had invasive tumors (Table 1). Quality and risk of bias Because a portion of the study selection contained neoadjuvant patients and a portion of patients was not formally randomized to the US-FNA or US-CNB groups for the trial, there is some risk of uncertainty in the study, resulting in approximately half of the uncertainty in patient selection, flow, and timing. In index text and flow and timing, most studies used standardized postoperative pathology findings as a criterion, showing a low risk of bias. Figs. 2 and 3 show the overall risk of bias in the results for every study and the pooled results, and the overall quality is quite good. 
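As an illustration of the quantities summarised in the statistical analysis above, the following sketch shows how per-study sensitivity and specificity can be pooled from the extracted 2×2 counts on the logit scale, together with the I² statistic used to grade heterogeneity. It is a simplified fixed-effect illustration with made-up counts, not the Meta-DiSc 1.4 procedure used in the review.

# Simplified illustration (not the Meta-DiSc 1.4 algorithm) of pooling sensitivity
# and specificity from 2x2 counts. The three example studies are placeholder values,
# not data from the review.
import numpy as np

studies = [  # (TP, FP, FN, TN) per study -- placeholder values
    (45, 1, 5, 60),
    (80, 0, 9, 110),
    (30, 2, 4, 55),
]

def logit(p):
    return np.log(p / (1.0 - p))

def inv_logit(x):
    return 1.0 / (1.0 + np.exp(-x))

def pool_proportion(events, totals):
    """Inverse-variance fixed-effect pooling on the logit scale, with I^2."""
    # 0.5 continuity correction keeps the logit finite when a cell is zero
    e = np.asarray(events, dtype=float) + 0.5
    n = np.asarray(totals, dtype=float) + 1.0
    theta = logit(e / n)
    var = 1.0 / e + 1.0 / (n - e)
    w = 1.0 / var
    pooled = np.sum(w * theta) / np.sum(w)
    q = np.sum(w * (theta - pooled) ** 2)
    dfree = len(theta) - 1
    i2 = max(0.0, (q - dfree) / q) * 100.0 if q > 0 else 0.0
    return inv_logit(pooled), i2

tp, fp, fn, tn = (np.array(x, dtype=float) for x in zip(*studies))
sens, i2_sens = pool_proportion(tp, tp + fn)   # sensitivity: TP / (TP + FN)
spec, i2_spec = pool_proportion(tn, tn + fp)   # specificity: TN / (TN + FP)

print(f"Pooled sensitivity {sens:.3f} (I^2 = {i2_sens:.1f}%)")
print(f"Pooled specificity {spec:.3f} (I^2 = {i2_spec:.1f}%)")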
The 17 circles in the US-CNB publication bias funnel plot indicate the 17 studies in the review, and the middle line indicates the pooled DOR. The distribution of the 17 circles showed a great deal of symmetry, indicating no significant publication bias (Fig. 4). Diagnostic accuracy of US-CNB for diagnosing ALNs The authors plotted the forest plot of the specificity and sensitivity of US-CNB in the diagnosis of ALNs metastasis (Fig. 5). The overall sensitivity of US-CNB for the detection of ALNs metastases from BC was 0.90 (95% CI [confidence interval] 0.87-0.91; p = 0.00). Pooled studies were heterogeneous (I 2 = 57.30%). The overall specificity of US-CNB for the detection of ALNs metastases from BC was 0.99 (95% CI 0.98-1.00; p = 0.62). Pooled studies were homogeneous (I 2 = 0.00%). To assess the diagnostic value and predictive accuracy of US-CNB, it was further evaluated using the SROC curve, which had an Area Under the Curve (AUC) of 0.9797 (Fig. 6). The closer the AUC is to 1.0, the higher the performance of the diagnosis. These results supported a good ability of US-CNB to distinguish ALNs metastases from BC. Comparison of the accuracy between US-CNB and US-FNA Nine of the articles in our review include comparative studies of US-CNB and US-FNA (Table 2). Next, the authors compared the diagnostic value of US-CNB and US-FNA by summarizing these 9 articles. As shown in Fig. 7, in terms of overall sensitivity for the detection of ALNs metastases from BC, US-CNB was 0.88 (95% CI 0.84-0.91; p = 0.12) vs. 0.73 (95% CI 0.69-0.76; p = 0.91) for US-FNA. The pooled studies of US-CNB were heterogeneous (I 2 = 38.40%) and those of US-FNA were homogeneous (I 2 = 0.00%). At the same time, as shown in Fig. 8, in terms of overall specificity for the detection of ALNs metastases from BC, US-CNB was 1.00 (95% CI 0.99-1.00; p = 1.00) vs. 1.00 (95% CI 0.99-1.00; p = 0.92) for US-FNA. The pooled studies of US-CNB and US-FNA were both homogeneous (I 2 = 0.00%). Then, the authors plotted the SROC curves for US-CNB and US-FNA (Fig. 9). By comparing their sensitivity, specificity, and SROC curves, the authors found that US-CNB was slightly better than US-FNA in detecting ALNs metastases from BC. Heterogeneity The I 2 value of the overall sensitivity of US-CNB was > 50%, indicating a moderate degree of heterogeneity. Spearman correlation analysis was used to further investigate the source of heterogeneity. The results showed r = 0.080, p = 0.759, which was not statistically significant and indicated no correlation. The results suggest that the moderate heterogeneity in the overall sensitivity of US-CNB is independent of the threshold effect and may be due to other causes. The authors then performed subgroup analyses to further explore the sources of heterogeneity. Subgroup analysis The use of preoperative NAC treatment before US-CNB testing may lead to the misclassification of tumors as negative, with implications for the accuracy of ALNs detection. 30 First of all, the authors divided the studies into two groups (with and without preoperative NAC treatment); the pooled studies without preoperative NAC treatment were homogeneous (I 2 = 0.00%). Secondly, the authors divided all the studies into three groups (Asia, Europe, and the United States) by region. The results show that the accuracy varied by region. As shown in Table 4, across the three groups, sensitivity and specificity were better in Asia than in Europe and the USA, possibly due to the large number of patients studied in the Asian region.
Usually, the larger the BC volume, the faster the proliferation and the longer the growth time of the tumor cells, and the higher the likelihood of ALNs metastasis (Table 5). Sensitivity and specificity increased with increasing clinical tumor size. Finally, the authors analyzed whether the number of punctures could have an impact on the diagnostic accuracy of US-CNB. The authors divided all studies into two groups (< 3 and > 3) by the number of punctures. The authors found that sensitivity increased with the number of punctures, but specificity did not improve, which may be related to insufficient sample size (Table 6). Overall, the use of preoperative NAC treatment, different regions, clinical tumor size, and the number of punctures may be factors affecting the heterogeneity. Postoperative complications The authors found that postoperative complications of US-CNB were reported in 12 of the included studies. In these, postoperative complications occurred in 65 of 2521 patients. In addition, the authors introduced data on postoperative complications of US-FNA for comparison with US-CNB. After comprehensive data analysis, the authors found that US-FNA was associated with fewer postoperative complications than US-CNB. The authors also found that US-FNA was associated with less patient-perceived pain than US-CNB. 28 Discussion The presence of ALNs metastases is an important indicator of poor prognosis in BC patients. Accurate preoperative staging of ALNs in BC allows early assessment of the patient's axillary status, development of treatment strategies, and assessment of prognosis for individualized patient treatment. 31 Two large randomized trials of axillary management, ACOSOG Z0011 2 and IBCSG 23-01, 32 changed conventional axillary therapy to reduce unnecessary ALND and improve patients' postoperative quality of life. In the future, axillary management may become more precise and personalized, and accurate preoperative staging of the axilla may reduce the need for additional surgery, further reducing the use of ALND. Imaging is often used for preoperative screening and assessment of patients. In general, the modalities used to image the ALNs include nuclear medicine, magnetic resonance, and ultrasound, of which ultrasound is the most commonly used and can easily obtain high-quality images of the ALNs under non-invasive conditions. 33 With the development of ultrasound technology, the use of US-guided puncture biopsy techniques has become more widespread; among these, US-CNB has a low False Negative Rate (FNR) and has been shown to have excellent safety and accuracy. 20 By summarizing all the studies, the authors found the following advantages of US-CNB in the preoperative detection of ALNs metastases in BC patients: 1) No general anesthesia is required; 2) It saves time and prevents a second procedure or unnecessary SLNB, since precise axillary staging can be obtained at the first visit; 3) It has better diagnostic performance than FNA; 4) Immunohistological tests can be run on the extracted samples; and 5) It is an effective and safe diagnostic tool. 34 In recent years, a growing body of literature has demonstrated that diagnostic results are more accurate when the two types of detection are used in combination. 35 Cserni and colleagues showed that combining the two assays for diagnostic purposes resulted in higher specificity and sensitivity, as well as fewer false negatives, thus improving the performance of the assay and providing maximum diagnostic accuracy for patients.
36 The combination of dye-isotope and US-CNB can help identify some lymph nodes that are not visible on ultrasound. 19 Therefore, US-CNB can be used in combination with other methods to improve diagnostic accuracy. The number of punctures of a biopsy may have an impact on the accuracy of the diagnosis. The study by Macaskill and colleagues showed that the diagnostic rate increased with the number of punctures; in their trial, the rate was 81.8% (45/55) for the first puncture, 96.4% (53/55) for the second, and 100% (55/55) for the third. 37 In that study, the diagnostic rate of the second puncture was already high. In addition, the continued use of puncture to obtain additional samples may improve the diagnostic rate in some patients with small ALNs involvement. 17 The most critical aspect of US-CNB operation is avoiding damage to adjacent structures, such as blood vessels or nerves, and the vast majority of anterior lymph nodes are located caudal to the axilla. For ALNs with a complex surrounding structure, a modified location and method of puncture can safely access a greater proportion of the ALNs and effectively reduce complications. 38 However, part of the lymph nodes are located higher up in the armpit, close to the major vessels, and the use of US-CNB poses a safety risk for these patients. 27 Table 4. Subgroup analysis based on region for the accuracy of US-CNB. 95% CI, 95% Confidence Intervals; US-CNB, Ultrasound-guided Core Needle Biopsy. In rare situations where the anterior axillary lymph nodes are located near large blood vessels or deep in the thoracic wall, a safer method should be chosen instead of attempting US-CNB. Therefore, a good understanding of the structure of ALNs can increase the safety of US-CNB and reduce complications. The authors found that a small number of patients developed some postoperative complications, including pain, hematoma, bleeding, bruising, and pneumothorax. Of these, pain was the most common postoperative complication. To date, no serious postoperative complications have been reported in the literature. During puncture, the tumor may undergo malignant seeding along the trajectory of the needle, or minor lesions may occur around the tumor and result in incomplete resection, which may lead to tumor recurrence. 39 In some cases, US-CNB may cause a hematoma or fibrosis after the puncture, making it difficult to accurately identify the location and shape of lymph nodes during surgery. 40 Different approaches, patient choice, and operator proficiency may influence the development of complications. 41 Continuous ultrasound monitoring, appropriate needle selection, improved operator proficiency, and appropriate patient selection can effectively prevent potential complications of US-CNB. 25 In terms of needle selection, it has been reported that minor complications occurred in only a small proportion of patients in previous studies using conventional automated needles for puncture. However, in the study by Abdsaleh and colleagues, a modified type of needle was used, and patients had no postoperative complications. 42 To date, it has been debated whether US-CNB is superior to US-FNA in detecting ALNs in BC patients. Among the studies the authors included, eight studies performed comparative analyses between US-CNB and US-FNA. The authors found that the sensitivity of US-CNB was better than that of US-FNA, but both were similar in terms of specificity.
Previous studies have shown that the inadequate rate of US-FNA samples ranged from 0% to 54%. 43 The higher specificity of US-FNA in our study may be associated with the choice of trials with larger tumors or the exclusion of patients with insufficient samples. 44 US-CNB uses a larger needle size than US-FNA for sample collection and may obtain more material. For the identification of micro-metastases in lymph nodes, US-CNB is superior to US-FNA, and US-CNB can measure the size of metastatic deposits to identify micro-metastases in ALNs. 21 The ability of US-CNB to visualize the structure of the lymph nodes can be of significant help in pathological diagnosis. 45 US-CNB shows ALNs structures that can be evaluated by anatomic pathologists, whereas US-FNA only shows cells that need to be evaluated by cytopathologists, and the experience of the pathologist may also influence the differences in the reported procedures. 46 In addition, samples obtained by US-CNB contain sufficient RNA and DNA to allow molecular experiments and immunohistochemistry to further assess the status of tumor biomarkers and provide a useful basis for clinical treatment planning. Therefore, US-CNB is superior to US-FNA in the diagnosis of ALNs in BC patients. A number of meta-analyses have previously compared US-CNB with US-FNA. A meta-analysis by Houssami and colleagues found no significant difference between the two, although the lack of a sufficiently large sample limited the study and the accuracy of its results. 47 Balasubramanian and colleagues showed that US-CNB is a superior diagnostic technique to US-FNA for detecting ALNs metastasis of BC; the sensitivity of US-CNB was 0.88 (95% CI 0.84-0.91; p = 0.12) vs. 0.73 (95% CI 0.69-0.76; p = 0.91) for US-FNA, the specificity was similar, and the AUCs were 0.984 and 0.979. 48 In addition, Pyo and colleagues investigated the accuracy of the two methods according to the cytological preparation method and found that there was no significant difference between the different cytological preparation methods of US-FNA, while the sensitivity of US-CNB was higher than that of US-FNA. 49 US-CNB may also have a lower FNR than US-FNA. 20 In addition, a recent study found that FNR may be strongly associated with the size of suspicious ALNs in BC patients. 50 In the majority of cases, false negatives may occur because the ALNs cannot be palpated in the fatty tissue of the axilla or because lymph nodes cannot be identified on ultrasound in the setting of inflammatory disease. As there are differences between the different pathological types of BC, there will also be some false negatives related to tumor type [51,52]. The authors found a moderate degree of heterogeneity in the overall sensitivity of US-CNB. Firstly, the use of NAC treatment before surgery may alter the initial condition of the axilla, which may lead to false negative results and thus affect the accuracy of US-CNB. Secondly, there are differences between regions, and these differences can lead to heterogeneity. Thirdly, different sizes and grades of BC masses, with different probabilities of metastasis to ALNs, may have an impact on the accuracy of US-CNB. As BC mass grade and size increased, the risk of ipsilateral ALNs metastasis increased, as did the sensitivity of US-CNB.
26 Finally, it is also possible that the number of punctures may have an impact on the accuracy of US-CNB, as micrometastases may sometimes be present in the ALNs of BC patients and require multiple punctures to be diagnosed. The authors strictly followed the systematic review method during the research process, the included literature was of high quality, and a large number of clinical samples were included, which gives our results good credibility. However, there are several limitations to our meta-analysis that need to be considered. First of all, there was a moderate degree of heterogeneity in the overall sensitivity of US-CNB in our analyses, and the authors only performed subgroup analyses for four factors; the remaining factors were not included. Other factors may also influence the accuracy of US-CNB, such as the ethnicity of the patient, the timing of the US-CNB assessment, and the degree of illness. Secondly, some of the included literature did not provide sufficient information, which may lead to some bias in the results. Thirdly, the different equipment used in each study and the different skills of the operators made it impossible to achieve a uniform standard, which may have affected the credibility of the results to some extent. However, the authors did not analyze these factors and could not eliminate their influence on the results. Fourthly, some of the literature the authors included was from small sample studies, which may have influenced the results. Finally, the authors only selected articles published in English and did not select articles published in other languages, which biased our literature search. Although our meta-analysis has many limitations, the results of our study can provide great help in the preoperative evaluation of ALNs metastasis in BC patients. The preoperative use of US-CNB can help patients significantly reduce the time and cost of diagnosis and reduce the pain of secondary surgery. However, due to the lack of data, more large studies are needed to further confirm US-CNB as a screening criterion and diagnostic tool for ALNs metastasis in BC patients. Conclusions In conclusion, US-CNB has good specificity, sensitivity, and accuracy in the diagnosis of ALNs metastases in BC patients and can be used as a preoperative detection method to facilitate early axillary staging and enable personalized treatment planning for patients. Authors' contributions Feng L had full access to all the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis. Concept and design: Feng L, Qi X, Jiale W. Acquisition, analysis, or interpretation of data: Feng L, Qi X, Jiale W, Jing W. Drafting of the manuscript: Feng L, Qi X, Jiale W. Critical revision of the manuscript for important intellectual content: Feng L. Statistical analysis: Qian Y, Jing W, Runzhao G. Administrative, technical, or material support: Qian Y, Jing W, Runzhao G. Supervision: Jing W. All authors have read and approved the manuscript.
v3-fos-license
2024-06-30T15:05:36.710Z
2024-06-27T00:00:00.000
270836155
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://doi.org/10.3390/genes15070852", "pdf_hash": "d78cd449d3b814deb06ad57f8cd8e339c870a332", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41777", "s2fieldsofstudy": [ "Medicine" ], "sha1": "b3ac4c9b9f04ef826c0dd7f8feb0a0717d35da2f", "year": 2024 }
pes2o/s2orc
Association of LPP and ZMIZ1 Gene Polymorphism with Celiac Disease in Subjects from Punjab, Pakistan Celiac disease (CD) is a complicated autoimmune disease that is caused by gluten sensitivity. It was commonly believed that CD only affected white Europeans, but recent findings show that it is also prevalent in some other racial groups, like South Asians, Caucasians, Africans, and Arabs. Genetics plays a profound role in increasing the risk of developing CD. Genetic variations in non-HLA genes such as LPP, ZMIZ1, CCR3, and many more influence the risk of CD in various populations. This study aimed to explore the association between LPP rs1464510 and ZMIZ1 rs1250552 and CD in the Punjabi Pakistani population. For this, a total of 70 human subjects were selected and divided into healthy controls and patients. Genotyping was performed using an in-house-developed tetra-amplification refractory mutation system polymerase chain reaction. Statistical analysis revealed a significant association between LPP rs1464510 (χ2 = 4.421, p = 0.035) and ZMIZ1 rs1250552 (χ2 = 3.867, p = 0.049) and CD. Multinomial regression analysis showed that the LPP rs1464510 A allele reduces the risk of CD by ~52% (OR 0.48, CI: 0.24–0.96, p = 0.037), while C allele-carrying subjects are at ~2.6-fold increased risk of CD (OR 3.65, CI: 1.25–10.63, p = 0.017). Similarly, the ZMIZ1 rs1250552 AG genotype significantly reduces the risk of CD by 73% (OR 0.26, CI: 0.077–0.867, p = 0.028). In summary, genetic variations in the LPP and ZMIZ1 genes influence the risk of CD in Punjabi Pakistani subjects. The LPP rs1464510 A allele and ZMIZ1 AG genotype play a protective role and reduce the risk of CD. Introduction Celiac disease (CD) is a complex, autoimmune inflammatory disease affecting 3 million humans globally [1]. Subjects suffering from CD are very sensitive to gluten protein, mainly present in wheat, rye, and barley. Upon exposure to gluten, intestinal cells are damaged due to the overactivation of the immune system, which results in diarrhea, fatigue, weight loss, and many other undesirable symptoms. The incidence rate of CD varies among countries; European countries have a higher CD prevalence as compared to the Asia-Pacific region. These differences in CD prevalence could be due to ethnic differences in genetic factors and per-capita wheat consumption [2,3]. CD was often believed to only affect white Europeans, but new findings show that it is also prevalent in other racial groups, including South Asians, Caucasians, Africans, and Arabs [4]. Almost 0.7% of people in the United States and 1% of people in Europe are affected by CD [5,6]. Rapidly changing dietary choices and lifestyles, along with advancements in diagnostic techniques, could be the reasons for the increasing incidence of CD. According to recent studies, the frequency of CD in Middle Eastern Arab countries ranges from 0.6 percent to 1.1 percent [7]. Patients with CD exhibit a broad range of symptoms, including intestinal and non-intestinal symptoms [8]. However, intestinal symptoms are more commonly reported [9]. CD is also one of the causes of malabsorption due to a damaged intestinal cell lining, which can result in reduced growth and development in children; however, these side effects are less noticeable in adults.
Intake of a gluten-free diet is the only cure/therapy for CD [10]. Although a gluten-free diet effectively manages CD and helps improve the intestinal cell lining, some patients do not respond well to this dietary intervention and develop CD-related complications like intestinal adenocarcinoma, T-cell lymphoma, and refractory sprue [11]. Specific serological tests, histological analysis of duodenal biopsies, and a gluten-free diet are all necessary for the diagnosis of CD [7]. Genetic testing for HLA susceptibility is increasingly used to diagnose celiac disease and assess family member risk [12]. Observance of a strict gluten-free diet for life is the only known cure. Most people who follow a gluten-free diet observe that their symptoms return to normal after four weeks of dietary intervention [13]. Serologic normalization of tissue transglutaminase antibodies, which can take a month to more than a year, is generally preceded by symptom improvement (resolution of diarrhea or reduction of abdominal discomfort), which is subsequently followed by improvement in histologic findings. As noted above, not everyone's intestines heal [14]. The innate and adaptive immune systems are both involved in CD pathogenesis. Protein peptides from gluten enter the intestinal lamina propria [15], where they are degraded, are identified by antigen-presenting cells (APCs) through HLA class II molecules, and ultimately result in an abnormal CD4+ T cell-mediated immunological response [9]. An integral part of CD pathophysiology is the formation of the peptide-HLA complex on APCs, which controls the transcription, configuration, and signaling preferences for the events implicated in celiac disease [16]. Along with environmental/dietary factors, genetic factors also contribute to the onset of CD [17]. Approximately 40% of CD risk is associated with the human leukocyte antigen (HLA) class II haplotype DQ2 or DQ8 [18]. It is well reported that intestinal microbiome composition changes due to HLA haplotypes, and this dysbiosis stimulates gluten sensitivity [17]. To date, 39 non-HLA genes have been identified that increase the risk of CD [9]. Some of the non-HLA genes overlap with other pathologies like Crohn's disease, type 1 diabetes, rheumatoid arthritis, and juvenile idiopathic arthritis. One of these non-HLA genes, LPP, encodes the lipoma preferred partner (LPP) protein, which is localized in the cell periphery and interacts with paxillin in focal adhesions [19]. It is also reported that LPP-associated paxillin is more frequently observed in focal adhesions of enterocytes in CD patients as compared to controls [20]. Thus, alterations in cell shape and arrangement of the cytoskeleton in celiac enterocytes could be triggered by LPP, which is encoded by the LPP gene [20]. Another protein, a member of the protein inhibitor of activated STAT (PIAS) family, is encoded by the ZMIZ1 gene, which is considered a risk gene for vitiligo in the Chinese population [21] owing to its role in the proliferation and migration of melanocytes [22]. Zinc finger MIZ-type containing 1 (ZMIZ1), encoded by the ZMIZ1 gene, regulates the activity of various transcription activators such as Smad3/4, Notch1, p53, and androgen receptors.
It is also reported that genetic variations in LPP and ZMIZ1 can influence their function and increase the risk of CD and other diseases [23,24]. In recent years, due to advancements in technology, several loci have been studied for their association with CD; among these, LPP and ZMIZ1 have been in focus [25,26]. However, the reported association of the LPP and ZMIZ1 genes with increased CD risk is inconsistent [27][28][29]. The prevalence of CD in Pakistan is unknown; however, according to clinicians, CD is quite common among children and adults in our clinical setting [30]. Apart from this, CD is the least studied pathology in Pakistan [31]. Data obtained from several databases showed that only a few articles from Pakistan have been published, and most of these are published in national journals. These articles primarily focus on the clinical side of CD and the challenges of gluten-free diets. No article has been published on the genetics of Pakistani CD patients. Thus, the current study aimed to check the association of LPP and ZMIZ1 genetic variations with the risk of CD in Pakistani patients. Sample Collection and DNA Extraction This study was approved by the institutional (University of Sargodha, Pakistan) ethics review committee. The approval code is SU/ORIC/801 (date: 19 April 2022). All the procedures and protocols followed in the current study were in accordance with the Declaration of Helsinki. Informed oral/written consent was obtained from the patient or guardian. A total of 70 subjects were recruited from different areas of Punjab, Pakistan, and divided into two groups: healthy controls (n = 39) and patients (n = 31). Healthy controls were without any gastroenterological conditions or gluten sensitivity. Celiac disease was diagnosed by an expert gastroenterologist based on gluten sensitivity and gastrointestinal problems. A tissue transglutaminase IgA (tTG-IgA) antibody test as well as endoscopy (extension of the crypts, partial to complete villous atrophy according to the Marsh classification [32]) were used to confirm the disease diagnosis.
After informed consent, a 3 mL venous blood sample was aseptically drawn by an expert phlebotomist into an EDTA-containing vacutainer. The phenol-chloroform-isoamyl alcohol method was used to extract DNA for genetic analysis. In this process, 1000 µL of Tris-HCl (20 mM) was placed in a labeled Eppendorf tube, and 400 µL of blood sample, thawed at room temperature, was added. The tube was centrifuged at 13,200 rpm for 10 min; the pellet was saved, and the supernatant was discarded. A total of 500 µL of Tris-HCl was then added to the tube containing the pellet, the pellet was completely broken up in it, and the tube was centrifuged at 13,200 rpm for 5 min. This washing step was repeated until a whitish or pinkish pellet was obtained. The next step was incubation: 375 µL of 0.2 M sodium acetate, 50 µL of SDS (10%), and 20 µL of proteinase K were added to the tube containing the washed white or pink pellet, and the pellet was again broken up in these components. The mixture was incubated at 56 °C for 2 h or at 37 °C overnight. After the incubation, DNA was separated from cell debris and proteins using PCI (25 phenol : 24 chloroform : 1 isoamyl alcohol). For this purpose, 130 µL of PCI was added to the tube containing the incubated mixture, and the tube was centrifuged again at 13,200 rpm for 10 min. Two layers appeared after centrifugation: the upper aqueous layer contained the DNA threads, while the lower reddish organic layer contained the proteins and cell debris. The upper aqueous layer containing DNA was transferred to another labeled Eppendorf tube very carefully without disturbing the lower reddish layer. The next step was washing the DNA with ethanol. For this purpose, 1 mL of ice-cold absolute ethanol was added to the tube containing the aqueous layer, which was then centrifuged at 13,200 rpm for 10 min. After centrifugation, the DNA stuck to the bottom as a tiny pellet, and all the ethanol was removed very carefully without disturbing the pellet. To remove protein contaminants from the DNA, 500 µL of 70% ethanol was added to the tube, followed by 10 min of centrifugation at 13,200 rpm; the 70% ethanol was then carefully removed without disturbing the pellet. The tube was carefully placed on a tissue and rotated 180° so that any remaining droplet was removed, and then left open under a fan for proper air drying so that the remaining droplets evaporated. Finally, 150 µL of sterile water was added to the tube, and the tube was stored at −20 °C [33]. A questionnaire about demographic, clinical, and dietary details and medicinal history was also filled out for all enrolled subjects. Genetic Analysis The quality and quantity of extracted genomic DNA were assessed using 0.8% agarose gel electrophoresis and a NanoDrop 2000c spectrophotometer (ND 2000, Thermo Fisher Scientific, Waltham, MA, USA). Agarose gel electrophoresis was run for 30 min at 90 volts.
Celiac disease (CD)-associated genetic variations of LPP and ZMIZ1 were searched in various databases like NCBI PubMed and Google Scholar using keywords such as genetic variations, LPP gene polymorphism, genetics of celiac disease, ZMIZ1 gene variations, and many more. A review of previously published literature indicated that LPP rs1464510 and ZMIZ1 rs1250552 were associated with the risk of CD. The majority of the studies reported a positive association; however, a few studies also reported that these polymorphisms had no role in CD. Thus, these polymorphisms were selected to study their association with CD in the Pakistani population. To analyze the selected genetic variants (LPP rs1464510 and ZMIZ1 rs1250552), primers were designed for tetra amplification refractory mutation system polymerase chain reaction (T-ARMS PCR) by following the steps described by Hussain et al. [34]. Primer 1 software was used to design primers for the selected SNPs (LPP rs1464510 and ZMIZ1 rs1250552). Primer 1 generates different possible primer combinations for a specific SNP. All possible primer combinations for a SNP were checked with an oligo-analyzer for parameters such as GC content, Tm, primer length, secondary structures, hetero-dimers, and molecular weight. After carefully examining all these characteristics, the best primer pairs were selected and used for genetic analysis. A GC content of 40-60% is recommended. If there is a secondary structure between the set of tetra primers, significant dimer amplification occurs, but no amplification product is generated. To avoid this kind of complexity in primer design, the internal hairpin ΔG should be less negative than −3.3 kcal/mol, the external hairpin ΔG less negative than −2 kcal/mol, and self-dimers with a ΔG less negative than −6 kcal/mol are acceptable at the 3′ ends of the primers. T-ARMS PCR is a convenient, less laborious, and cost-effective genotyping assay. It works by using two pairs of primers: (1) inner primers and (2) outer primers. Outer primers are not allele specific, whereas inner primers are allele specific. T-ARMS PCR only needs gel electrophoresis after the PCR reaction; thus, it reduces the time and cost involved in genotyping compared with Sanger sequencing without compromising the specificity and accuracy of the genotypes. The sequences of the primers used for genotyping LPP rs1464510 and ZMIZ1 rs1250552 and the resulting product sizes are provided in Table 1. T-ARMS PCR primers were optimized for the genotyping of the selected variations using gradient PCR. After primer optimization, genotyping of all samples was carried out by T-ARMS-PCR using the primers listed in Table 1. Amplified products were resolved by horizontal gel electrophoresis on a 1.5% agarose gel containing ethidium bromide. A DNA ladder was used to estimate the size of the amplified products, and the gel was visualized under UV light in the Gel Doc EZ system (Figure 1).
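As a rough illustration of the kind of screening the oligo-analyzer step performs, the short Python sketch below checks primer length, GC content against the 40-60% window, and an approximate melting temperature. The sequences are hypothetical examples (not the T-ARMS primers of Table 1), the Wallace-rule Tm is only a coarse approximation for short oligos, and the secondary-structure ΔG checks described above are not included.

```python
def gc_content(seq):
    """Percent G+C in a primer sequence."""
    seq = seq.upper()
    return 100.0 * (seq.count("G") + seq.count("C")) / len(seq)

def wallace_tm(seq):
    """Rough melting temperature using the Wallace rule: 2*(A+T) + 4*(G+C)."""
    seq = seq.upper()
    at = seq.count("A") + seq.count("T")
    gc = seq.count("G") + seq.count("C")
    return 2 * at + 4 * gc

def within_gc_window(seq, gc_min=40.0, gc_max=60.0):
    """True if the primer GC content falls inside the recommended window."""
    return gc_min <= gc_content(seq) <= gc_max

# Hypothetical candidate primers -- placeholders, not the published T-ARMS primer set.
candidates = {
    "inner_forward": "AGGTCACTGGATTCACAGGT",
    "inner_reverse": "CCTGAAGTCCATGGTAGCAA",
}
for name, seq in candidates.items():
    verdict = "OK" if within_gc_window(seq) else "outside 40-60% GC window"
    print(f"{name}: {len(seq)} nt, GC {gc_content(seq):.1f}%, Tm ~{wallace_tm(seq)} C, {verdict}")
```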
Statistical Analysis Statistical analysis was carried out with SPSS software, version 20 (IBM Inc., New York, NY, USA). Allelic frequency was calculated by the gene counting method, while inferential statistics such as the chi-square test were employed to compare genotype distributions and determine the association of LPP rs1464510 and ZMIZ1 rs1250552 with the risk of celiac disease. Multinomial regression analysis was used to quantify the disease risk associated with the studied polymorphisms. Odds ratios and 95% confidence intervals were calculated by multinomial regression analysis, which was adjusted for age and gender. Genotype was taken as an independent variable. Various models were applied for the regression analysis. The dominant model checked the combined effect of the homozygous recessive and heterozygous genotypes on the risk of CD in comparison to the homozygous dominant genotype. The codominant model checked the individual effects of the homozygous recessive and heterozygous genotypes on CD risk in comparison to the homozygous dominant genotype. The recessive model compared the combined effect of the homozygous dominant and heterozygous genotypes with the homozygous recessive genotype. The allelic model compared both alleles of LPP rs1464510 and ZMIZ1 rs1250552. An α-error of 5% was assigned for statistical significance. A p-value of <0.05 was considered statistically significant. Results LPP rs1464510 genotypic frequencies were significantly different between healthy controls and patients (χ 2 = 6.033, p = 0.049) (Table 2). Intragroup analysis of genotypic frequencies showed that the LPP rs1464510 AA genotype was more prevalent in healthy controls (51%) as compared to the heterozygous AC genotype (39%) and the homozygous CC genotype (10%). Contrary to this, in patients, the LPP rs1464510 heterozygous AC genotype was more prevalent (61%) as compared to the homozygous AA genotype (23%). Intergroup analysis showed opposite trends in the frequencies of the AA and AC genotypes in the healthy control and patient groups. The frequency of the AA genotype was nearly double in healthy controls as compared to patients (51% vs. 23%), while the frequency of the AC genotype was higher in patients (61% vs. 39%). Allelic frequencies also showed significant differences in the prevalence of the A and C alleles between healthy controls and patients (χ 2 = 4.421, p = 0.035). The frequency of the LPP rs1464510 A allele was higher in both study groups; however, it was more prevalent in the healthy group as compared to patients (71% vs. 53%) (Table 2). The higher frequencies of the LPP rs1464510 AA genotype and A allele in healthy controls could be an indication that this allele and genotype play a protective role against celiac disease. Genotypic frequencies of ZMIZ1 rs1250552 were not significantly different between the study groups (χ 2 = 5.49, p = 0.064); however, allelic frequencies showed a borderline difference (χ 2 = 3.867, p = 0.049). In healthy controls, the allelic frequencies of the ZMIZ1 rs1250552 A and G alleles were the same, while in patients, the A allele was more prevalent (60%) than the G allele (40%).
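As an illustration of how association statistics of this kind are obtained from raw counts, the following Python sketch computes a Pearson chi-square and an odds ratio with a Woolf 95% confidence interval for a 2 × 2 allele-by-group table. The counts are hypothetical placeholders rather than the study data, and the sketch does not reproduce the age- and gender-adjusted multinomial regression actually used (that analysis was run in SPSS 20).

```python
import math

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square for a 2x2 table [[a, b], [c, d]] without continuity correction."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

def odds_ratio(a, b, c, d):
    """Odds ratio with a Woolf (log-based) 95% confidence interval."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

# Hypothetical allele counts (risk allele vs. other allele) in patients and controls.
patients_risk, patients_other = 29, 33
controls_risk, controls_other = 23, 55

chi2 = chi_square_2x2(patients_risk, patients_other, controls_risk, controls_other)
or_, lo, hi = odds_ratio(patients_risk, patients_other, controls_risk, controls_other)
print(f"chi-square = {chi2:.3f}")
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```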
The association between celiac disease, LPP rs1464510, and ZMIZ1 rs1250552 was further explored and quantified by the odds ratio (Table 3). The odds ratio was calculated by applying multinomial regression adjusted for age and gender. Several models were applied to check the CD risk associated with the genotypes of LPP rs1464510 and ZMIZ1 rs1250552. The allelic model demonstrated that carriers of the LPP rs1464510 A allele have a 52% lower risk of developing CD as compared to carriers of the LPP rs1464510 C allele (OR 0.48, CI: 0.24-0.96, p = 0.037). Furthermore, carriers of the LPP rs1464510 AA genotype also showed a reduced risk of CD as compared to carriers of the CC genotype in the co-dominant model (OR 0.28, CI: 0.058-1.35, p = 0.11) and the dominant model (OR 0.68, CI: 0.15-2.60, p = 0.52); however, these results were not statistically significant. The LPP rs1464510 AC genotype showed a slightly higher risk of celiac disease in the co-dominant model (OR 1.01, CI: 0.23-4.45, p = 0.98). In line with these results, the recessive model demonstrated that rs1464510 C allele carriers are at 2.6 times higher risk of developing celiac disease as compared to carriers of the AA genotype (OR 3.65, CI: 1.25-10.63, p = 0.017). For ZMIZ1 rs1250552, the allelic model showed that carriers of the G allele have a reduced risk of CD in comparison to carriers of the A allele (OR 0.47, CI: 0.22-1.00, p = 0.051); however, this finding did not reach statistical significance. In comparison to the allelic model, the dominant model showed that AG+GG genotype carriers have a 73% reduced risk of CD (OR 0.27, CI: 0.09-0.83, p = 0.022). The ZMIZ1 rs1250552 AG genotype also demonstrated a significant reduction in the onset of CD (OR 0.26, CI: 0.077-0.867, p = 0.028). The recessive model exhibited an increased risk of CD in AA+AG genotype carriers. Collectively, it could be said that the LPP rs1464510 A allele and the ZMIZ1 rs1250552 G allele protect against CD and reduce the risk of CD development. Discussion The current study aimed to check the association of LPP rs1464510 C>A and ZMIZ1 rs1250552 A>G with the risk of celiac disease (CD) in the Pakistani population. To the best of our knowledge, this study is one of the first to explore the genetics of CD in Pakistani subjects. Previously published studies have mainly focused on the clinical side and the challenges of gluten-free diets. The current study demonstrated that the LPP rs1464510 A allele is more prevalent in healthy subjects, and carriers of the A allele are at reduced risk of developing CD as compared to individuals carrying the C allele. While the allelic frequency of ZMIZ1 rs1250552 was the same for both alleles in healthy controls, the A allele was more prevalent in patients. The studied polymorphisms (LPP rs1464510 and ZMIZ1 rs1250552) are intronic variants and could be influencing the expression of the LPP and ZMIZ1 proteins, which play an important role in maintaining cell physiology and activating transcription, respectively. The ZMIZ1 gene is also involved in immune function, which is central to the pathology of CD. LPP interacts with paxillin in focal adhesions and alters the cell shape and cytoskeleton arrangement in enterocytes. Microtubules also play an important role in maintaining the cell's shape. A recent study by Stricker et al. also demonstrated that posttranslational modification of microtubules disturbs cell morphology and promotes CD [35]. Intact enterocytes are the primary requirement for proper digestion and absorption of nutrients. Nanayakkara et al.
proved that gliadin (a type of gluten protein) peptides disturb the homeostasis of enterocytes by altering the cell shape, rearranging the cytoskeleton, increasing focal adhesions, and altering LPP cellular distribution [20]. LPP is found to be more strongly activated in the enterocytes of celiac patients as compared to healthy cells [36]. This further strengthens the role of LPP in the onset of CD. Genetic studies have also shown that non-HLA genes play a significant role in estimating CD risk along with the HLA genes (which account for 40% of the CD risk) [37]. Various immune-related genes are also associated with CD [38]. Sharma et al. have stated that the association of five non-HLA genes (TAGAP, IL18R1, RGS21, PLEK, and CCR9) with the risk of celiac disease varies with geographical differences [39]. Various studies conducted on first-degree relatives and siblings have shown that genetic variations in non-HLA genes can help in the assessment of celiac disease. These non-HLA genetic variants can modulate the immune response to gluten. Most of the genome-wide association studies (GWAS) have been conducted on European populations; no study has addressed the association of genetic variants with the risk of celiac disease in Asian populations. Various GWAS have shown that the rs1464510 polymorphism in the non-HLA LPP gene is a strong predictor of celiac disease in Swedish families [27,40] and in the US population [41]. However, in Italian families, LPP rs1464510 showed a moderate association with celiac disease [42]. Some studies have also shown the involvement of rs1464510 in other immune-related diseases like cancer, vitiligo, diabetes, and rheumatoid arthritis [43]. These co-morbidities are also influenced by genetic variations of the ZMIZ1 gene [44,45]. This gene is a strong risk predictor of vitiligo in the Chinese population. Celiac disease-associated changes in enterocytes also trigger variations in skin cells and lead to skin co-morbidities [46]. In line with the findings of previous studies, our study also concluded that LPP rs1464510 and ZMIZ1 rs1250552 are associated with CD. The LPP rs1464510 A allele protects from celiac disease, while the C allele increases the risk of celiac disease. This study provides insights into the genetics of celiac disease in the Asian population, and it is the first study from Pakistan reporting the association of the LPP rs1464510 A allele and the ZMIZ1 rs1250552 AG genotype with the risk of celiac disease. This information can also help clinicians diagnose celiac disease, which is tricky owing to its overlapping symptoms with other diseases. Clinicians can also use this genetic information to assess the risk of celiac disease in non-symptomatic children of a family with a history of celiac disease. Although this study has added important information to the present literature, more studies with larger sample sizes and more stringent inclusion and exclusion criteria should be conducted to verify this association in the Punjabi Pakistani population. Conclusions The current study summarizes that LPP rs1464510 and ZMIZ1 rs1250552 are associated with the risk of celiac disease. The LPP rs1464510 A allele is more prevalent in the Punjabi Pakistani population. Statistical analysis revealed that the LPP rs1464510 A allele and the ZMIZ1 rs1250552 AG genotype reduce the risk of celiac disease by 52% and 73%, respectively. The LPP rs1464510 C allele increases the risk of celiac disease by 2.6-fold.
Figure 1. Gel electrophoresis of LPP rs1464510 and ZMIZ1 rs1250552: (a) LPP rs1464510; Lane L shows a 100 bp ladder. Lanes 1 and 3 show bands at 678 bp and 233 bp, which indicate a homozygous AA genotype. Lane 2 indicates a homozygous CC genotype, showing bands at 678 bp and 509 bp. Lane 4 shows a heterozygous AC genotype, as three bands are present at 678 bp, 509 bp, and 233 bp. (b) ZMIZ1 rs1250552; lane L shows a 100 bp ladder. Lanes 1 and 3 did not show any amplification. Lanes 2 and 5 show a homozygous GG genotype (646 bp and 316 bp bands). Lane 4 shows a heterozygous AG genotype (646 bp, 384 bp, and 316 bp bands). Lane 6 indicates a homozygous AA genotype (646 bp and 384 bp bands). Table 2. Genotypic and allelic frequency of LPP rs1464510 and ZMIZ1 rs1250552 in the population of Punjab, Pakistan. AA represents the homozygous wild type, AC and AG are heterozygotes, and CC and GG are homozygous mutants. Table 3. Multinomial regression analysis of rs1464510 for the risk of celiac disease.
v3-fos-license
2020-10-28T18:26:51.387Z
2020-09-09T00:00:00.000
225306219
{ "extfieldsofstudy": [ "Environmental Science" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/2075-163X/10/9/798/pdf", "pdf_hash": "32188c811f05949047871e36e24cafbd9e328822", "pdf_src": "Adhoc", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41778", "s2fieldsofstudy": [ "Geology", "Environmental Science" ], "sha1": "ec8babb091b0b92d0178d55be2393a51b503ffbb", "year": 2020 }
pes2o/s2orc
Uranium and Thorium Resources of Estonia We provide a compilation of geology of uranium and thorium potential resources in the Ordovician black shale (graptolite argillite), Cambrian–Ordovician shelly phosphorite and in the secondary resources (tailings) of Estonia. Historical and new geological, XRF and ICP-MS geochemical data and ArcGIS modeling results of elemental distribution and tonnages are presented. The Estonian black shale contains 5.666 million tons of U, 16.533 Mt Zn, 12.762 Mt Mo, 47.754 Mt V and 0.213–0.254 Mt of Th. The Estonian phosphate resources, altogether about 3 billion metric tons of phosphate ore, contain about 147,000 to 175,000 tons of U. Rare earth element concentrations in the phosphorite ore average at 1200–1500 ppm of ΣREE. Thorium can also be a possible co-product. The mining waste dump at the Maardu contains at least 3650 tons of U and 730 tons of Th. The Sillamäe radioactive waste depository contains about 1200 tons of U and 800 tons of Th. Due to the neighboring geological positions, as well as environmental constraints and mining technologies, the black shale and phosphorite can be treated as a complex multi-resource, possibly at the continental scale, which needs to be extracted together. Introduction There are no uranium or thorium deposits in the national registry of the Estonian mineral resources. This is partly because uranium has a very short history of usage and scientific studies, and partly because there are no major discoveries in Estonia of economically significant uranium deposits. At the same time, Estonia has had historical mining activity for uranium and beneficiation for many decades. After World War II, due to the atomic bomb "competition", the Soviet Union started to look for uranium deposits. As the Estonian graptolite argillite-non-metamorphosed black shale-was known to host uranium, a mine and uranium beneficiation plant were built in a small town Sillamäe, north-eastern Estonia in 1948 ( Figure 1). The plant operated as a Soviet top-secret facility until 1991. After Estonian independence, it was later privatized in 1997 and renamed "AS Silmet". At the moment the plant produces rare metals, rare earth metals and their compounds, and is the most important REE producer in the European Union. Uranium beneficiation in Sillamäe started in 1948. A total of 22.5 tons of elemental uranium was produced from about 271,575 tons of graptolite argillite from an underground mine near Sillamäe town [1]. Due to low uranium concentration and primitive technology, a large part of the uranium was left in solid waste. In 1952, the mining of Estonian graptolite argillite stopped and between 1950 and 1977 more than four million tons of the uranium ore was imported from Middle Asia and Eastern Europe, mostly from Czechoslovakia and East Germany. The estimated amount of elemental uranium produced from that resource was 25,000 tons. In addition to graptolite argillite, underlying shelly phosphorites and some Precambrian granitoids may be of interest, however, limited data of the Estonian Paleoproterozoic rock makes it impossible to assess uranium and thorium resources at the moment in the Estonian crystalline basement. It is well known that phosphates of sedimentary origin may have high concentrations of uranium. Estonian phosphorites, known as Obolus-sandstone or shelly phosphorite, and formed around 490 million years ago contain uranium in amounts of 10-70 ppm. 
Phosphorus-rich layers are formed in shallow water conditions, where shell-rich sand is accumulated. Estonia has the largest phosphate reserves in Europe; they are, however, not currently exploited. Phosphorite was mined in the Maardu deposit and adjacent areas from 1921-1991. In addition, human-made secondary resources, such as tailings and excavated overburden, can be the subject of research for uranium and thorium resource assessments. In this paper, we provide a compilation of the geology of uranium and thorium in black shale, phosphorite and in the secondary resources of Estonia. Materials and Methods Historical and new geological, XRF and ICP-MS geochemical data of the Estonian Ordovician black shale (graptolite argillite), Cambro-Ordovician shelly phosphorite and waste material (tailings) after uranium and phosphate production are combined in this overview (locations in Figure 1). Initial potential reserves for a limited number of elements have been calculated for graptolite argillite ([2] and this study) and phosphorites (this study). As initial data, the combined database of 468 drill cores (Estonian Geological Survey and Estonian Land Board database, see www.maaamet.ee [3]) was used for graptolite argillite calculations.
Historical reports of phosphorite exploration typically contain relatively densely (vertically and laterally) populated data of P2O5, Fe2O3 and MgO concentrations, while no information is given about secondary and trace elements; however, the exploration reports were often followed by a series of works concerning the complex exploitation of phosphorite and rocks in its overburden. Those works contain a limited amount of vertically sampled phosphorite trace element data (ΣREE, Y, U, Th, Sr, occasionally Mo, Pb, Ag, etc.). As far as the workflow can be followed, the analyses were made from duplicate samples of the initial exploration campaigns. Uranium as well as Th were measured mostly either by X-ray fluorescence or, less frequently, by instrumental neutron activation methods. Quality was assured by inter-laboratory testing in up to four laboratories with both identical and different methods. A dataset containing 873 phosphorite trace element analyses was compiled for the purposes of the current work [4][5][6][7]. Weighted average concentrations were calculated for 160 drilled locations, excluding occasional black shale interlayers. The samples in Table 1 were recently analyzed in the ALS Geochemistry laboratory located in Outokumpu, Finland. Trace element composition was determined by ICP-MS after four-acid digestion. Major element composition was determined by ICP-AES analyses after lithium borate fusion. All black shale samples were roasted before chemical treatment due to the high content of carbon and sulfur. Total organic carbon (TOC) content in black shale samples was determined with a LECO analyzer by infrared detection of CO2 released from the combustion of the sample. Before infrared analyses, the samples were treated with HCl to remove carbonate carbon (inorganic carbon). The dataset was used for calculating raster models of lateral U and Th distribution as well as hypothetical reserves by the same methods applied to the graptolite argillite (GA) dataset [2]. Specific gravities from the original phosphorite resource estimation reports were applied in the calculations (2.12 g/cm3 and 2.26 g/cm3 for the Toolse and Rakvere deposits, respectively). One artificial data point was extrapolated in the southernmost part of the Rakvere deposit. The historical data were compared with new data, although new data are only available for a limited number of samples. Recent, mostly unpublished trace element chemical analyses (XRF and ICP-MS) were used in the assessment of U and Th potential related to shelly phosphorites. The calculated amounts are preliminary and should be taken with caution. Uranium and thorium contents in secondary resources were assessed based on historical data ([8]; Sillamäe radioactive waste depository) or by a combination of historical and new geochemical analyses (Maardu overburden piles). The ArcGIS software (versions ArcMap 10.7 and ArcGIS PRO 2.5; Environmental Systems Research Institute, Inc.-ESRI, Redlands, CA, USA) was used for generating elemental distribution maps and computing resource volumes.
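To make the two computational steps just described more tangible (thickness-weighted averaging of drill-core U concentrations and conversion of a gridded grade-thickness model into contained tonnages), the following Python sketch shows the underlying arithmetic. The interval data, cell size and cell grades are invented placeholders; the only value taken from the text is the Toolse specific gravity, and the real calculations were carried out as raster models in ArcGIS.

```python
# Hypothetical sampled intervals of one drill core: (interval thickness in m, U in ppm),
# with black shale interlayers already excluded, as described in the text.
intervals = [(0.4, 28.0), (0.6, 41.0), (0.5, 35.0)]

total_thickness = sum(t for t, _ in intervals)
weighted_u_ppm = sum(t * c for t, c in intervals) / total_thickness
print(f"Weighted average U: {weighted_u_ppm:.1f} ppm over {total_thickness:.1f} m")

# Tonnage from a gridded model: each cell contributes area * thickness * bulk density * grade.
cell_area_m2 = 100.0 * 100.0      # hypothetical 100 m raster cell
bulk_density_t_m3 = 2.12          # Toolse deposit specific gravity quoted above
cells = [                         # (ore thickness in m, U in ppm) per cell -- placeholders
    (1.5, 30.0), (2.0, 45.0), (1.2, 25.0),
]
ore_tonnes = sum(cell_area_m2 * th * bulk_density_t_m3 for th, _ in cells)
u_tonnes = sum(cell_area_m2 * th * bulk_density_t_m3 * (ppm * 1e-6) for th, ppm in cells)
print(f"Ore: {ore_tonnes:,.0f} t, contained U: {u_tonnes:.2f} t")
```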
Results According to the classification by the International Atomic Energy Agency (2018) [9], the Estonian U-bearing rocks belong to several types: (1) the largest potential resource, Ordovician graptolite argillite, belongs to Type 15.1-Black shale, Stratiform deposits; (2) Ordovician shelly phosphorites belong to Type 14.1 Phosphate, Organic Phosphorite; (3) the third target, uranium and thorium potential in secondary resources, is not listed in the present (2018) Uranium Resources Classification system. There are two cases in Estonia where uranium-bearing rocks or material have been piled up as mining waste (overburden) or historical uranium processing tailings. As the uranium concentrations in such tailings may be high in several countries, there is a potential need to update the Uranium Resources Classification to indicate this type of possible future resource as well. Uranium and Thorium in Black Shales Organic carbon-rich black shales have been studied regarding the industrial interest in a variety of metals, especially U, Mo, Zn, V, Ni, Cu, Cr, Co, Pb and Ag. These studies have shown a variety of metal sulfides in shales and suggested that sulfide minerals may be an integral part of the sediment diagenesis, e.g., [10][11][12][13]. Some black shales are significantly enriched in noble metals, occasionally coupled with Mo- and Ni-bearing shales. For instance, the Lower Cambrian black shales of southern China contain up to hundreds of ppb of PGEs and gold in strata deposited as individual, metal-rich sulfide layers, commonly 2-15 cm in thickness [14]. A number of these metals in the Fennoscandian-Baltoscandian black shales may be of commercial interest. In many Precambrian terrains, metamorphosed sedimentary rocks, which were initially black shales, are known and may also provide future economic interest. As an example, the Talvivaara mine in Sotkamo, Finland (previously owned by Talvivaara Mining Company Plc, now by Terrafame) is using black shale for the production of nickel, cobalt and zinc. The estimated resource is over one billion tons of ore. Technologically, it is the first mining operation in the world collectively recovering NiCoCuZn(Mn)(U) by bioheap leaching of polymetallic black shale. As a co-product, they recently started the extraction of uranium [15]. These black shale layers have been historically exploited for local uranium production in Sweden and Estonia. For example, in the Viken area, Sweden, Continental Precious Minerals Inc. has estimated the uranium resource to be 1.163 billion lbs of U3O8 (18 M lbs as indicated; 1.145 B lbs as an inferred resource) in Alum shale, along with a large number of other metals [26]. Kerogen in these black shales is of algal origin and the content of total organic carbon is mostly between 10-25 wt% [16]. The mineral matter of these black shales is dominated by clay minerals, illite-smectite and illite [22,27]. The high concentration of pyrite, which, together with kerogen, is thought to be the main carrier of some rare earth and other elements, is distinctive for black shale. The Alum shale and graptolite argillite form patches over extensive areas in the outskirts of the Baltica palaeocontinent [16]-Baltoscandia and Fennoscandia. A possible spatial continuity of those complexes is the graphitic phyllites that are found in the tectonically disrupted allochthonous and autochthonous Caledonian complexes in central and northern Sweden and Norway [28].
The metal-enriched phyllites exhibit geochemical signatures similar to the unmetamorphosed black shale of Baltoscandia [28]. These geochemical similarities may imply that organic-rich muds might have accumulated over a wide geographic area and probably under quite different depositional conditions, from shallow marine settings to continental slope environments. The black shales of Fennoscandia (Alum shale) and the graptolite argillite (GA) of Estonia can thus be treated as a metal ore and a twofold energy source (including U, Th and hydrocarbons). This means that these rocks have, apart from scientific value, a significant economic value as well.
Estonian Black Shale-Graptolite Argillite
Compared with the Fennoscandian sedimentary and metamorphosed black shales, the stratigraphic characteristics and geological position of the Estonian graptolite argillite (GA) are very simple (Figure 2). The Early Ordovician, organic-rich, marine, metalliferous black shale-graptolite argillite-lies beneath most of northern Estonia. It has been named "Dictyonema shale", "Dictyonema argillite" or previously "alum shale"; however, the name "dictyonema" came from the benthonic root-bearing Dictyonema flabelliforme, which was subsequently reassigned to the planktonic nema-bearing Rhabdinopora flabelliformis [29]. More recently, the term graptolite argillite has been used in Estonia, while "Dictyonema shale" is still used in the Russian literature. The Estonian graptolite argillite is commonly a fine-grained, unmetamorphosed, horizontally-lying and undisturbed, organic-rich (8-20%) lithified clay (Türisalu Formation), where the layers are usually 0.3 to 6 m thick. The graptolite argillite belongs to the group of black shales of sapropelic origin [2,23,24,30]. The Estonian GA crops out in many places in northern Estonia, especially in the klint area and in several narrow river valleys (Figure 2). As the Estonian Lower Palaeozoic sedimentary section, together with the Precambrian rocks, is inclined towards the south due to its position on the southern slope of the Fennoscandian Shield [31], at its southwestern end the GA layers lie at a depth of more than 250 m. The Estonian GA is commonly characterized by high concentrations of U (up to 1200 ppm), Mo (1000 ppm), V (can be over 1600 ppm), Ni and several other heavy metals, and is rich in N, S and O [2,22,23]. Examples of some major and trace element concentrations are shown in Table 1. These high concentrations of certain metals may be potentially useful and hazardous at the same time. In the Soviet Union, the GA was mined for uranium production at Sillamäe (see below, Section 3.3.). Between 1964 and 1991, approximately 73 million tons of graptolite argillite was mined and piled into waste heaps from the covering layer of phosphorite ore at Maardu, near Tallinn (see below, Section 3.3.).
While the reserves of Estonian graptolite argillite surpass those of Estonian kukersite (the proper oil shale), it is of too poor a quality for energy production at present. The GA calorific value ranges from 4.2 to 6.7 MJ/kg [22] and the Fischer Assay oil yield is 3-5% (for Estonian kukersite, it is about 30-47%, for example, [1]). The moisture content of fresh GA ranges from 11.9% to 12.5%, while the average composition of the combustible part is: C-67.6%, H-7.6%, O-18.5%, N-3.6% and S-2.6% [32]; however, even considering that it is a low-grade oil source, its potential oil reserves are about 2.1 billion tons [1]. The specific gravity (bulk density) of Estonian GA varies between 1800 and 2500 kg/m³ [30]. The pyrite content of GA is highly variable, ranging from 0.5% to 9.0% and averaging between 2.4% and 6%. Pyrite commonly forms fine-crystalline disseminations or thin interlayers and concretions of different sizes. The diameter of the pyrite concretions is usually 1.5-3 cm. It needs to be noted that some concretions have a complex structure, containing small crystals of sphalerite, galena and/or calcite. A higher degree of sulfide mineralization within the GA may be associated with the occurrence of silt interbeds. These interbeds may contain a higher amount of other authigenic compounds such as phosphates (mainly apatite as biogenic detritus and nodules), carbonates (calcite and dolomite as cement and concretions), baryte and glauconite [23]. The concentration of sulfur ranges between 2 and 6%, of which about 0.6-0.8% is bound in organic matter, ca. 0.3% is sulfate sulfur and the remaining part is sulfide sulfur [33].
Estonian Graptolite Argillite Resources
Most of the geological information on the Estonian GA has been obtained from basement mapping and different exploration projects, which were conducted by the Geological Survey of Estonia and its predecessor institutions, starting from the 1950s. A huge amount of information on the lithology and geochemistry of the GA was collected during the exploration of Estonia's phosphorite resources in the 1980s. The initial estimates of the total GA reserves in Estonia range from 60 [30] to 70 billion tons [1]; however, little is known about the previous calculation methods and the initial data (number of drill cores, etc.) that were used. Nevertheless, GA resource estimates were regularly reported in works dedicated to phosphorite exploration and its complex exploitation (Table 2), because GA was considered a potentially useful material in the phosphorite overburden.
In those works, resource estimates were mostly calculated by the panel (blocking) method [4]. The earliest partially preserved documentation of U resource estimates dates to 1944, when the information concerning radioactivity was checked and confirmed by geologists of the USSR's North-Western Geological Administration. The largest concentrations were recorded in the middle part of the deposit, within the northeast of Estonia and the western part of the St. Petersburg (at the time Leningrad) region. Uranium concentrations varied from tens to a few hundred ppm (80-750 ppm U). In addition to uranium in the shale, elevated molybdenum and vanadium contents were also recorded. From 1945 onwards, the Baltic Geological Expedition identified 14 deposits of low-grade uranium ore-Sillamäe (5464 t U, 260 ppm average U), Toila (7000 t U, 250 ppm average U), Aseri, Saka and others in present-day Russian territory. The deposits were explored by boreholes on grids of 250 m × 250 m and 125 m × 125 m. A total resource of 72,000 t U was reported in the Baltic region. Alongside uranium exploration, a black shale enrichment technology was developed to produce uranium concentrates from the rock. As mentioned briefly, the efforts to estimate GA resources and the U contained within them were almost exclusively associated with phosphorite exploration. The highest U resources were attributed to GA in the overburden of the Aseri phosphorite deposit (Table 2). Even though the GA wedges out in the territory of the Rakvere deposit, U resources were estimated in its northern parts, where the higher U concentrations occur. A somewhat more independent GA exploration was finalized in 1989, when resources in western Estonia (highest shale thickness) were calculated with the focus on hydrocarbons, but also on U, V and Mo. The conditionally exploitable proportion of GA was evaluated based on a minimum resource thickness of 1.6 m and a minimum calorific value of 1450 kcal/kg (~6.1 MJ/kg) [34]. Very little data have been collected in the last three decades; however, modern GIS-based methods now allow us to obtain better estimates of the total resource and to visualize metal distribution. In recent research, the verified database of 468 drill cores (database of the Geological Survey of Estonia and the Estonian Land Board, [3]) has been used as the initial data. The estimated area of the Estonian GA on the mainland and islands is 12,212.64 km², with a corresponding volume of 31,919,259,960 m³ [2]. For comparison, Estonian oil shale-kukersite-occupies an area of 2884 km², and its reserves (proven plus probable) are about 5 billion tons [35]. To calculate the total mass of the GA, the value of the specific gravity (density) is required. It is known [30] that the density of the graptolite argillite varies to a great degree, between 1800 and 2500 kg/m³. Assuming an average density of 1800 kg/m³, the total mass of GA is about 57.45 billion tons, while in the case of 2500 kg/m³ the mass is 79.80 billion tons. Assuming the average density to be 2100 kg/m³, the total mass of GA is about 67 billion tons, which is in accordance with the earlier estimates of 60 to 70 billion tons. It should be noted, however, that in the eastern part of the GA basin there are frequent silty interlayers, resulting in a lower specific gravity of about 1850 kg/m³, which has also been the basis of the deposit-specific calculations in Table 2 [4]. In the Toolse deposit, an average specific gravity of 1900 kg/m³ was applied [36].
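The total-mass figures quoted above follow directly from the published volume multiplied by an assumed bulk density; a minimal Python check of this arithmetic:

volume_m3 = 31_919_259_960                    # estimated GA volume on the mainland and islands [2]
for density_kg_m3 in (1800, 2100, 2500):      # range of reported bulk densities [30]
    mass_billion_t = volume_m3 * density_kg_m3 / 1e3 / 1e9   # kg -> tonnes -> billions of tonnes
    print(f"{density_kg_m3} kg/m3 -> {mass_billion_t:.2f} billion tonnes")
# prints 57.45, 67.03 and 79.80 billion tonnes, matching the estimates given in the text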
For western Estonia, the exploitable proportion of GA was based on a minimum resource thickness of 1.6 m and a minimum calorific value of 1450 kcal/kg. The average specific gravity used was 2080 kg/m³, but variations were considered in the calculations [34]. The distribution of several metals such as U, Zn, Mo and V in the GA has been modeled earlier [2]. The initial data were selected from the databases of the Geological Survey and the Estonian Land Board. It needs to be mentioned that the metal concentration data represent the average metal concentrations of each GA drill core. There are differences in metal distributions: the central and western regions of eastern Estonia show the highest concentrations of V and Mo, whereas V is also high in the southern parts of eastern and central Estonia. Uranium shows the highest contents in the eastern part of Estonia, the concentrations in western Estonia are intermediate, and the lowest values are typical of central Estonia (Figure 3). Uranium distribution has not been modeled for the Estonian islands due to the small number of available analyses. It is also likely that the high concentrations in the southwest area may be an artefact of the model, since only a few drill cores are available there, but they show locally high contents of U. Based on these data, it can be concluded that the concentration of most of the metals (except Zn) is relatively low in central Estonia. It is also important to emphasize that the available geochemical data are relatively unevenly distributed across the area, and the present geochemical generalization is informative but must be taken with caution. In addition, there are very few data from the southern margin of the GA area; however, we believe that, due to the limited thickness of the GA there (less than 0.5 m), this has not much affected the calculated total elemental amounts. Compared with the shale standard values (PAAS and NASC), the Estonian GA is very rich in U and V.
For instance, the average U concentration in the Saka section (267 ppm) is a hundred times higher than the corresponding value for NASC [23]. In the case of V, there is a nine-fold difference between the concentration in NASC and the average concentration detected, for example, in the Saka section in eastern Estonia (1190 ppm; [23]). In general, the U content of GA shows quite a strong positive correlation with the organic matter content, which most likely indicates early fixation via metal-organic complexes. At the same time, a correlation of P2O5 contents with other trace elements, such as U, was not detected. Nevertheless, it is well established that U is more enriched in the bottom of the black shale bed, at least in the western part of Estonia. Recent ICP-MS geochemical data from 11 north-western Estonian drill cores were divided into upper and lower beds, which revealed a reverse trend with regard to the association of U with P2O5 and total organic carbon (TOC) [38]. Namely, the correlation of U and TOC is weakly positive in the upper bed and reverses to weakly negative in the lower bed (see Section 4, Discussion). The opposite was observed in the relation between P2O5 and U: their association becomes more relevant in the lower, U-enriched bed. Hierarchical clustering of the dataset classifies U within the group of typical redox-sensitive elements such as Mo, Sb, V and Re, as well as TOC, Ag, Pb and Te. While average metal concentrations are very useful for dividing the GA into "poor" and "rich" deposits, the total content of a certain metal depends on the thickness of the GA bed. To calculate the total amount of metal per map cell, the ESRI ArcGIS software was employed. As an example, the total tonnage of U in the Estonian GA is shown in Figure 3B. The presented model is based on: (1) the element grid, which shows the element distribution in ppm; (2) an interpolated grid of the GA thickness, in meters; (3) an assumed average density of 2100 kg/m³; and (4) a cell size of 400 m × 400 m, the same as used for the element and thickness grids. These calculations provide more realistic total amounts of the elements in the Estonian GA (not just amounts based on an average concentration value in ppm). The calculated total tonnage of U is about 5.6656 million tons (6.6796 million tons as U3O8). The calculated zinc tonnage is 16.5330 million tons (20.5802 million tons as ZnO) and that of Mo is 12.7616 million tons (19.1462 million tons as MoO3). The highest calculated element amounts show a somewhat similar pattern: western Estonia has the highest potential, especially for U and Mo; however, there are also distinctions between those elements. Central Estonia, for example, shows the lowest enrichment for most elements, except Zn. Here, the same calculation, based on a 400 m × 400 m cell, was also carried out for thorium and vanadium (for Th, see Figure 4). Vanadium concentrations (as drill core averages; 469 drill cores) commonly vary between 190 and 1700 ppm and can be as high as 4500 ppm in certain layers [5]. The calculated vanadium tonnage is 47.7538 million tons. Thorium concentrations have been analyzed in fewer than 100 drill cores, where the averages vary between less than 1 and 17 ppm. As there are very few measurements of Th concentrations in western and eastern Estonia, the distribution and tonnage calculations can be provided only for the central part of Estonia (Figure 4).
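The per-cell arithmetic behind this grid model can be summarized in a few lines of Python; the sketch below follows points (1)-(4) above, but the example cell values are illustrative and not taken from the actual grids.

CELL_AREA_M2 = 400 * 400          # 400 m x 400 m grid cell
DENSITY_KG_M3 = 2100.0            # assumed average GA density

def cell_element_tonnage(concentration_ppm: float, thickness_m: float) -> float:
    """Tonnes of an element contained in one raster cell."""
    rock_mass_t = CELL_AREA_M2 * thickness_m * DENSITY_KG_M3 / 1000.0   # kg -> tonnes
    return rock_mass_t * concentration_ppm * 1e-6                       # 1 ppm = 1 g per tonne

# Example: a cell with a 3 m thick GA bed averaging 100 ppm U holds roughly 100 t of uranium.
print(f"{cell_element_tonnage(100.0, 3.0):.1f} t U in one cell")

Summing this per-cell quantity over all cells of the element and thickness grids yields the total tonnages quoted above.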
The concentrations of Th are higher in the southern and central parts of the calculated area; however, this distribution model should be taken with care, since the number of measurements in the southern part is very low. Depending on the area of the calculation, the Th tonnage is 213,000 to 254,300 tons (in Figure 4, the tonnage of Th is 213 thousand tons).
Uranium and Thorium in Cambro-Ordovician Shelly Phosphorites
It is well known that phosphates of sedimentary origin have higher concentrations of uranium (100-150 ppm U) and lower concentrations of thorium (20-35 ppm Th), whereas phosphates of magmatic or metamorphic origin have higher thorium (40-120 ppm Th) and lower uranium (25-35 ppm U; [9]). The world's largest phosphate deposits consist of marine phosphorite of continental shelf origin that may contain syn-sedimentary, stratiform, disseminated uranium in fine-grained apatite. Historically, uranium has been recovered in the USA, Belgium and Israel as a by-product of phosphate production. For instance, in Florida, about 17,275 t of uranium was produced between 1978 and 2000 from eight production centers [9]. The US Geological Survey dataset of world phosphate mines, deposits and occurrences lists 1635 deposits, most of them marine phosphorites. As of 2015, only 57 phosphate deposits were listed in the UDEPO (World Distribution of Uranium Deposits) database, suggesting that uranium resources in phosphorites could be much more significant than previously thought [39]. Thus, phosphorites commonly contain very large uranium resources, albeit at a very low grade (<50-200 ppm U). World uranium resources in phosphorites are estimated at 15-20 million tons [9]. Well-known examples of this type of phosphorite deposit include districts in Florida and the Phosphoria Formation (USA) and the Gantour, Meskala and Oulad Abdoum Basins (Morocco). Other types of phosphorite deposits consist of organic phosphate, including sandy to argillaceous marine sediments enriched in fish and/or shell remains that are uraniferous. The Estonian phosphorites belong to this type of deposit, where phosphorus, but also uranium and thorium, are associated with Cambrian-Ordovician brachiopod shells. Unfortunately, very little information about the Estonian phosphorites is available in the international literature. The total resource is estimated at more than 800 million tons of P2O5 [40] or 3 billion metric tons of phosphate ore [41]. The world's phosphate rock reserves are estimated at 70 billion tons [39]. According to present knowledge, Estonia holds the largest unused sedimentary phosphate rock reserves in Europe. The Estonian phosphorite is known as Obolus-sandstone (shelly phosphorite, [42]). It was formed approximately 488 million years ago and occurs in the Upper Cambrian and Lower Ordovician boundary beds. In the Estonian basement, it is known as the Kallavere, Tsitre, Ülgase and Petseri Formations.
Phosphorus-rich layers were formed in shallow water conditions, where shell-rich sand was accumulated by continuous displacement caused by waves and currents. This means that phosphorite was formed as separate zones, unlike oil shale, which covers the whole distribution area. Estonian phosphorite can be described as a yellowish-light or dark-grey, fine- or coarse-grained, slightly cemented sandy deposit (Figure 5). Usually it is weakly cemented or friable; strongly cemented phosphorites are less widespread [42]. Estonian phosphorite contains 10.5-10.6% P2O5, ca. 16.0% CaO, 0.8% MgO, 0.5-0.7% Al2O3, 1.8-1.9% Fe2O3, 64.0-65.0% SiO2 and 0.8% F. Phosphate rocks free from impurities contain 28.2-28.8% P2O5; some studies report even higher P2O5 contents of about 33.6-35.5% [21]. The P2O5 content in the phosphorite layer is commonly in the range of 6-20%. Concentrations of major and minor components of some phosphorite samples are shown in Table 3.
Table 3. Major (wt%) and trace (ppm) elements of some phosphorite samples. LOI-loss on ignition. Bdl-below detection limit. Data sourced from the FRAME project [43]. Locations are shown in Figure 1.
In the vertical sections, there are no universally definitive trends in the distribution of U (Figure 6), although a certain co-variability with P2O5 is noted, as is also implied by the biplot of U and P2O5 from the historical dataset (Figure 7). In the Toolse deposit, the richest phosphorite beds are the lowest, and higher enrichment of U can occur there. In the Rakvere deposit, there appears to be a two-layered phosphorite accumulation, which is not currently defined in the resource models and is treated as a single layer. Generally, the highest P2O5 concentrations are noted in the upper part of the ore layer [40]. The same trend is at least partly observed in the distribution of U concentrations. The content of U in phosphorite is about an order of magnitude lower than in the overlying black shale. The regional overview of average U contents reflects the areas where the phosphorite deposits are located (Figure 8).
There are occurrences of phosphorite in the western part of the country, but they are of very low grade, and this is also reflected in the lower concentrations of U (~5-15 ppm). The three easternmost data points are somewhat biased, because in those cases only single, very rich phosphorite samples were analyzed. There appears to be a trend that the northernmost part of the phosphorite deposit area is the most enriched in U (~25-50 ppm; Figure 9). In the middle and southern parts of the phosphorite occurrence area (Rakvere deposit), there are only a few locations where the U concentration exceeds 25 ppm. The mean concentrations of U are 24 ± 6 ppm and 15 ± 4 ppm in the Toolse and Rakvere deposits, respectively. The overall weighted average across all Estonian phosphorite samples is 16 ± 11 ppm, implying a high variability but a generally low amount of U. The errors quoted above are given as one standard deviation. Prospective resource estimates associated with phosphorite are given in Table 4, and an extended overview in Appendix A. Thorium seems to show a similar trend, being more concentrated in the northernmost part of the phosphorite deposit area (Figure 10). Yet, this indication is very speculative, because there are too few data points in the Rakvere deposit area.
It can be mentioned here that the northernmost part of the Toolse deposit also yields comparatively higher total rare earth element (REE) concentrations. Recent studies have shown relatively enriched but variable concentrations of REEs in single phosphate shells. For instance, La in a single shell ranges from 50 to 550 ppm, Ce from 40 to 1200 ppm, Pr from 4 to 170 ppm, Nd from 20 to 800 ppm, Sm from 3 to 180 ppm and Gd from 4 to 135 ppm [44]. The total REE content can reach 3000 ppm; however, on average it ranges between 1000 and 2000 ppm. At the moment the Estonian phosphorites cannot be regarded as an economic REE source, but considering REEs as a co-product of phosphorus production, it may be economically feasible. Considering the possibility that these resources will be developed for phosphorus (and REE) production, it is important to focus also on uranium and thorium; however, the contents of U and Th are too low to be of economic interest on their own, but high enough to be environmentally significant. The new analyses show that the uranium content in the ore ranges between 2 and 91 ppm (52 samples), whereas in single shells it ranges between 0.5 and 159 ppm (41 samples; Figure 11). The thorium content in the ore ranges between 1 and 15 ppm and in shells between 0.5 and 48 ppm. It needs to be emphasized that, in both single shells and ore, uranium and thorium contents are spatially very heterogeneous. Considering that a future possible mining operation would have an annual production of 5 Mt of ore with average U and Th contents as shown in Table 5, the total uranium content reaches above 120 tons and thorium reaches above 27 tons. It would be interesting to add that the total REE content for this annual mining amount would be 720 tons at ΣREE = 1200 ppm, or 900 tons at ΣREE = 1500 ppm.
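The co-product totals above follow from simple mass balance (annual ore tonnage times average grade); a minimal Python sketch, in which the ore-average grades are back-calculated from the quoted totals because Table 5 itself is not reproduced here:

ANNUAL_ORE_T = 5_000_000                      # 5 Mt of ore per year (scenario in the text)
implied_grades_ppm = {"U": 24.0, "Th": 5.4}   # back-calculated from the quoted 120 t U and 27 t Th

for element, ppm in implied_grades_ppm.items():
    tonnes_per_year = ANNUAL_ORE_T * ppm * 1e-6   # 1 ppm = 1 g per tonne
    print(f"{element}: {tonnes_per_year:.0f} t per year as a potential co-product")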
Figure 11. Concentrations of U and Th in single brachiopod shells and in brachiopod sandstone-phosphorite ore as analyzed by ICP-MS. Unpublished data from the University of Tartu and Tallinn University of Technology.
Table 5. Minimum, maximum, average and standard deviation of U and Th contents in analyzed brachiopod shells and phosphate ore. As an example, U and Th contents for mining activity with different annual amounts have been calculated as a possible co-product.
Secondary Uranium and Thorium Resources
There are two cases in Estonia where mining waste rock and processing tailings may provide a target for future studies of uranium and thorium resources: (1) the overburden piled up during phosphorite mining at Maardu, northern Estonia, and (2) the Sillamäe radioactive waste depository in north-eastern Estonia (Figure 1).
Graptolite Argillite Overburden Dumps in Maardu
During the opencast mining of phosphorite in 1964-1991, some tens of millions of tons of graptolite argillite were mined from the covering layer of the phosphorite ore at Maardu town, near Tallinn. As overburden, the graptolite argillite was mixed with other overlying rocks, including carbonate rocks (limestone), sandstone, glauconite sandstone and Quaternary sediments. The material was concentrated into large waste dumps. For example, in 1989, opencast mining at Maardu was carried out on more than 6 km² [45]. The mining and processing stopped in 1991. At present, the waste dumps at Maardu contain about 73 million tons of graptolite argillite [45,46]. The average uranium contents in graptolite argillite sections in this area vary between 42 and 60 ppm, with the highest values of 300-450 ppm in some sub-layers. Assuming an average of 50 ppm uranium in the waste dumps, the total uranium content can be as high as 3650 tons. There are only a few thorium analyses in the area, ranging from 5 to 14 ppm. Assuming an average of 10 ppm, the total thorium content in the waste dumps is then 730 tons. These values are not high if the waste is considered as natural rock; however, these dumps are an environmental concern. Under normal weathering conditions, the graptolite argillite is easily oxidized, and spontaneous combustion can happen. In some places at Maardu, the temperatures in the heaps occasionally exceeded 500 °C [45]. It has been noticed that spontaneous combustion can occur in heaps that are only a few months old as well as in heaps over 30 years old, leading to the conclusion that some old heaps can still be dangerous. These processes lead to an annual leaching of 1500 tons of mineral matter per square kilometer of a waste dump, with the wastewater being discharged into the nearby Lake Maardu [47]. The graptolite argillite is also a major source of radon (Rn) in northern Estonia. Very high radon concentrations of up to 10,000 Bq/m³ have been recorded at some natural outcrops. This means that it is necessary to deal with these dumps in the near future. As graptolite argillite, apart from U, is also very enriched in several other metals (see above), one possible scenario is to extract most of the commodity metals, including uranium.
A complex metal extraction may be the key to an economically feasible rehabilitation of these dumps.
The Radioactive Waste Depository in Sillamäe, Northeastern Estonia
The Soviet Union started uranium mining and production in the top-secret industrial town of Sillamäe, north-eastern Estonia, in 1947. From 1947 to 1952, about 270,000 tons of graptolite argillite was mined from an area of 5 hectares at the coastal cliff at Türsamäe, near Sillamäe. The uranium beneficiation plant in Sillamäe was established in 1948, and about 22.5 tons of elemental uranium was produced from Estonian graptolite argillite. The technology was primitive and provided only about 50% U recovery; as a result, a large part of the uranium was left in the solid waste. This production was found to be inefficient and the factory switched to other raw materials in 1950, but the uranium beneficiation process was stopped completely only in 1989 [48]. From 1971 to 1989, pre-processed uranium ore was imported. The estimated amount of elemental uranium in the U3O8 concentrate produced was 74,000 tons. Additionally, from 1982 to 1989, 1350 tons of UO2, containing 40-80% uranium, was imported to Estonia. In 1970, the processing of loparite, obtained from the Kola Peninsula, commenced. Loparite has high concentrations of tantalum, niobium and other metals, but it also yields uranium and thorium. It also contains rare earth metals, which were left in the waste at the beginning of the processing. Vast amounts of different ores and concentrates ended up as waste rock that was collected in the waste depository near Sillamäe, close to the coast of the Gulf of Finland. After uranium processing ceased in Estonia in 1989, industrial activity at Sillamäe experienced a significant decline throughout the 1990s. Finally, the Sillamäe plant was privatized in 1997 to form the company "AS Silmet", which continued to produce rare metals (tantalum, niobium) and rare earth metal products. The plant-now named Neo Performance Materials Silmet AS-remains the top world producer of niobium and tantalum products, including hydroxides, oxides, various grades of metal, metal hydrides, metal powders and NbNi alloy. Among its rare earth element products are lanthanum, cerium, praseodymium, neodymium and samarium-europium-gadolinium carbonates, oxides, metals, and chloride and nitrate solutions. It has been estimated that the Sillamäe radioactive waste depository contains about 4-6 million tons of uranium processing residuals, 1.5 million tons of oil-shale ash and waste from the processing of about 140,000 tons of loparite [8]. Calculations show that the remediated depository contains 1200 tons of uranium, 800 tons of thorium and 4.4 × 10¹⁵ Bq (1.2 × 10⁵ Ci) of naturally occurring radionuclides (decay products of U and Th), of which about 3 × 10¹⁴ Bq (7000 Ci) is ²²⁶Ra [8].
Metals in Graptolite Argillite
The vertical and lateral geochemical heterogeneity of the Estonian graptolite argillite (GA) has not been well understood, particularly the scale of the heterogeneity and the specific distribution patterns of elements. Recently, a study on vertical geochemical heterogeneity based on two cross-sections has shown distinctive differences between the eastern and western parts of the GA [23]. The previous geochemical explorations revealed that the studied sequences demonstrate pronounced vertical variations in U, V, Mo, Zn and other element concentrations.
A common distinctive feature of the sections is the occurrence of the highest concentrations of these elements in the lower half of the section. Trace metal enrichment in black shales is mostly explained by two alternative theories: (1) syn-sedimentary sequestration of metals from marine water under oxygen-deficient conditions, e.g., [49,50], or (2) flushing of the sediments by metal-enriched syngenetic brines or contemporaneous exhalation of such brines into the marine basin, e.g., [51][52][53]; however, these theories are challenged by works that underline the influence of source rocks and particulate precursor material on the final character of metal enrichment in black shales, or the crucial role of diagenetic redistribution processes induced by late diagenetic brines. Johnson et al. [54] proposed, based on complex stable isotope and pyrite geochemical data, that the preeminent source of metals in black shales globally could be terrigenous rocks after all, but their hypothesis brings in the possibility that very high oxygenation of the atmosphere promoted substantial oxidation of terrigenous material, which in turn released metals into the oceans. If true, these re-occurring events must have taken place together with widespread, but temporally constrained, (global) marine anoxia, which allowed the fixation of (redox-sensitive) metals into black shale sediments. In general, a U-Mo-V-Pb-(Co)-enriched trace metal association, with sporadically elevated concentrations of some other trace elements, was detected in the GA from the Saka (east) and Pakri (west) sections (Table 2) and in the regional E-W and N-S profiles (Figure 12). The U-enriched bottom part of the GA can be followed well in the western part of the country, whereas in the east, where the most important phosphorite deposits are situated, its distribution becomes more heterogeneous. Most of the highly enriched trace metals in the GA, as well as other abundant trace elements like As, Sb, Ni, Cu and Re, belong to the group of redox-sensitive and/or stable sulfide-forming metals and might undergo considerable partitioning in marine geochemical and biogeochemical cycles. As indicated by studies of trace elements in modern marine environments, e.g., [55][56][57], the redox-sensitive elements mostly occur as soluble species under oxidizing conditions. Under oxygen-depleted conditions, however, the redox-sensitive elements are typically present as insoluble species (metal-organic complexes, sulfides, metal oxyhydrates, etc.) and thus tend to be sequestered into the sediments. The whole metal-trapping process is strongly linked with organic matter breakdown and sulfate reduction processes, which inhibit the crystallization of sulfides. Based on comparative studies of trace element accumulation in modern and ancient organic-rich sediments (e.g., [23,58,59]), it has been suggested that oxygen availability in the sedimentary environment could have had major control over the development of the enriched trace metal associations in different black shales.
In general, the dominance of common marine redox-sensitive elements among the enriched metals in the GA favors syngenetic enrichment as the major process of trace metal sequestration. On the other hand, the remarkably high concentration of enriched elements in the GA and the lack of clear spatial covariance patterns imply that element sequestration solely from seawater due to Eh gradients is likely an insufficient model for explaining the observed large-scale trace metal heterogeneity in the GA [23]. The common distinctive feature of the sections is the occurrence of the highest concentrations of enriched elements in the lower half of the complex; however, the considered trace metals do not show linear covariance patterns, and the maximum (and minimum) enrichment intervals of the different enriched components mostly do not overlap. In the case of the Pakri GA sequence, one can separate an approximately 1.3 m thick lower part, which is enriched in some trace metals like Mo, U and Sb and also contains more organic matter, as indicated by higher LOI 500 °C values. While Mo gradually decreases towards the upper part of the Pakri sequence, the U and V contents are somewhat more erratic. Box plots as well as distribution histograms in Figure 13 clearly show that U, V, Mo and TOC are always more concentrated in the lower part of the north-western Estonian GA bed. The thinner GA complex from Saka, which on average contains more Mo, U and V than the Pakri GA, is also characterized by a larger variance of those elements. In the Saka samples, no clear vertical distribution trends of Mo and U can be followed; the concentrations fluctuate on a large scale and very high values alternate with low ones. These results agree with the observation that the V, U and Mo content in black shales typically correlates with the abundance of organic matter (Figure 13), likely indicating early fixation via metal-organic complexes. Previous studies have suggested that ~30% of the uranium in the GA is associated with organic matter, 30% with phosphate, another ~30% with clay phases and about 6% with pyrite [60]; however, recent microanalyses have revealed a mainly bimodal distribution of U only between the organic matrix and phosphate phases, while adsorption onto organic matter is assumed to be the primary occurrence form [61]. The bimodal character of U occurrence is also reflected in the bulk geochemistry: the association of U with TOC is stronger in the upper GA bed, whereas the association with P is stronger in the lower bed (Figure 13). The correlation coefficient is somewhat biased by occasional distinctly high P concentrations, but the simultaneously occurring highest U values (>200 ppm U) again speak in favor of phosphate-associated U in the lower GA bed.
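The bed-wise associations discussed here can be reproduced with a few lines of Python; the sketch below uses pandas and a tiny invented table only to show the grouping and correlation step (the actual dataset of 239 observations [38] is not reproduced here, and the column names are ours).

import pandas as pd

# Illustrative stand-in for the drill-core data; real values come from the NW Estonian cores [38].
df = pd.DataFrame({
    "bed": ["upper"] * 4 + ["lower"] * 4,
    "U":   [45, 60, 52, 70, 120, 210, 95, 160],       # ppm
    "Mo":  [150, 220, 180, 260, 300, 520, 240, 410],  # ppm
    "TOC": [12, 15, 13, 16, 11, 10, 12, 9],           # wt%
    "P":   [0.10, 0.12, 0.11, 0.13, 0.40, 0.90, 0.30, 0.70],  # wt%
})

# Pearson correlation of U with Mo, TOC and P, computed separately for each bed.
for bed, grp in df.groupby("bed"):
    corr_with_u = grp[["U", "Mo", "TOC", "P"]].corr().loc["U", ["Mo", "TOC", "P"]]
    print(bed, corr_with_u.round(2).to_dict())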
Generally, the relationship of U is the strongest with Mo (U-Mo biplot in Figure 13), quantified by a correlation coefficient of 0.83 in the upper GA bed.
Figure 13. Associations of U in black shale with other redox-sensitive elements, P and total organic carbon (TOC). The division is made between the upper and lower bed. Based on core material originating from NW Estonia; number of observations: 239 [38].
Complex research on the GA geochemistry and pyrite trace elements by laser ablation ICP-MS in the Pakri outcrop (western Estonia) revealed that Th is mostly associated with pyrite and, again, is most abundant in pyrite of the lower part of the GA, averaging about 200 ppm Th. Simultaneously, the variance of the Th concentrations is the highest in the lowest section. Thorium is thus clearly enriched in pyrite compared to the bulk rock [54,62].
Since pyrite formation seems to be mainly associated with bacterial sulfate reduction (negative δ34S), Th could also have been captured from the seawater during pyrite formation; but again, the high variance of this metal's concentration remains unexplained. The organic-rich black shale, of which the Estonian graptolite argillite is just a part, covers a large area of Baltoscandia and Fennoscandia (Figure 14). It was formed in the proximal settings of the Baltic Palaeobasin in the Early Ordovician, at a time when Baltica was located at around 40-50 degrees southern latitude [63,64]. On the regional scale, these black shales form a patchy but vast Middle Cambrian-Lower Ordovician black shale belt in the Baltoscandian region, stretching east-west from Lake Onega to the Jutland peninsula. Originally it accumulated in a large, exceptionally flat-floored epeiric sea, which has since been tectonically disturbed, especially in the northern sections (Figure 14). Even though the black shale in Sweden (known as Alum Shale) has historically been used mostly for uranium production, there is rising commercial interest in other metals. As the resource is vast in tonnage and geographical scale, the black shales should be considered a possible future resource for several metals, apart from uranium. Thus, the Estonian graptolite argillite, together with the Swedish Alum Shale, may be the largest European reserve of uranium.
Metals in Phosphorite
The Estonian phosphorite is a European-scale phosphorus reserve. At the same time, its high REE enrichment makes it attractive for REE production, which in turn will put a focus on a number of environmentally hazardous elements, including uranium and thorium. Even if Estonian sedimentary phosphorite has low concentrations of U and Th, its processing might concentrate radionuclides into the main or by-products depending on the chosen technological flowsheet, and might therefore result in the generation of Technologically Enhanced Naturally Occurring Radioactive Materials (TENORM). It would be beneficial to consider the extraction of U along with the phosphate products, as this would leave behind less environmentally hazardous by-products. If the decision is made to utilize the Estonian phosphate ore, it is likely that uranium and thorium could not simply be dumped into waste piles; instead, they would be extracted as co-products. The presently determined variability of REE and U in the Estonian phosphorites cannot easily be explained.
As recent lingulate brachiopods are commonly not rich in trace elements, including uranium and REEs (a few ppm or less), later processes are the likely cause of the metal enrichment in the phosphorites. According to the current understanding, the high and very heterogeneous concentrations of metals are the result of post-depositional diagenetic processes; however, the geological processes and other conditions (pH, Eh, etc.) involved need further detailed study. Although the beneficiation of the raw phosphate ore (separating shells from sandstone) and phosphorus extraction are technologically feasible, the technology for REE and uranium extraction in parallel with phosphoric acid production needs further development. From the perspective of creating marketable by-products from the phosphorite trace constituents, the northern part of the Toolse deposit seems attractive due to its relatively higher contents of U as well as total REEs compared to the Rakvere deposit; however, the total resources of U and REEs seem to be greater in the Rakvere deposit and its individual blocks, as the phosphorite bed is thicker there. Given the vast phosphorite reserves in Estonia and the critical nature of both phosphorus and REEs for the European economy and security, it may be a worthwhile opportunity to develop these resources into production at the European scale. It is important to note that, due to the neighboring positions of the black shale and phosphorite in the geological cross-section, future mining activity can be complex. There are no easy extraction technologies that can be applied to only one of those resources separately. Both the graptolite argillite and the phosphorite need to be treated as a complex multi-resource.
Conclusions
Regarding uranium and thorium, three possible rock types/resources can be in focus in Estonia: Ordovician black shales, Cambrian-Ordovician phosphorites and two sites with secondary resources (tailings). Due to their neighboring geological positions, the black shale and phosphorite can be treated as a complex multi-resource, possibly at the continental scale. Because of mining technologies and environmental concerns, these resources would most likely be mined simultaneously. Estonian phosphate resources are estimated to be more than 800 million tons of P2O5 or 3 billion metric tons of phosphate ore. The phosphorite contains rare earth elements in large quantities and uranium and thorium in smaller quantities. Prospective uranium resources associated with phosphorite are calculated to be between 147,000 and 175,000 tons. Considering future possible mining operations with an annual production of 5 Mt of ore, about 120 tons of uranium and 27 tons of thorium could be extracted as co-products, together with 720 to 900 tons of total REEs. The mining waste dumps at Maardu town contain at least 3650 tons of U and 730 tons of Th. The Sillamäe radioactive waste depository contains about 1200 tons of U and 800 tons of Th.
Funding: This research is partly funded by the Estonian Research Council grant RESTA20 to A.S.
Conflicts of Interest: The authors declare no conflict of interest.
Table A1. Phosphorite (P2O5 in Mt) and associated element resources in different Estonian deposits. Adapted from reference [4].
v3-fos-license
2020-06-11T09:07:23.151Z
2019-12-01T00:00:00.000
226798342
{ "extfieldsofstudy": [ "Computer Science" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://link.springer.com/content/pdf/10.1007/BF03546071.pdf", "pdf_hash": "02fefa2cc71d006aa9523913ad54df233c8551c4", "pdf_src": "Adhoc", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41779", "s2fieldsofstudy": [ "Biology", "Computer Science" ], "sha1": "df99f57ba5322203f265f09baabb358f307b1328", "year": 2019 }
pes2o/s2orc
GenTag: a package to improve animal color tagging protocol The individual identification of animals by means of tagging is a common methodological approach in ornithology. However, several studies suggest that specific colors may affect animal behavior and disrupt sexual selection processes. Thus, methods to choose color tagging combinations should be carefully evaluated. However, reporting of this information is usually neglected. Here, we introduce GenTag, an R package developed to support biologists in creating color tag sequence combinations using a random process. First, a single color tag sequence is created by an algorithm selected by the user, followed by verification of the combination. We provide three methods to produce color tag sequences. GenTag provides accessible and simple methods to generate color tag sequences. The use of a random process to define the color tags to be applied to each animal is the best way to deal with the influence of tag color upon behavior and life history parameters. INTRODUCTION The individual marking of animals in natural populations is a widespread methodological approach for field ecologists and provides the foundation for several methods to determine population size, lifespan, animal movements in the landscape, and migration patterns, among other possibilities (Sutherland 2006, McCrea & Morgan 2014). For birds, the most popular marking method is the application of color rings (Calvo & Furness 1992), where individuals receive unique combinations of color bands that allow the researcher to individually identify animals by recapture or observation at a distance. Since the early 1980s, several studies have reported the influence of color tags on bird social behavior (Burley 1986). Because of methodological limitations, most of these studies were carried out with captive populations (e.g., Burley 1986, Jennions 1998). Few studies took place in the field, and these suggest that some patterns of color tagging have an influence on social behavior (e.g., Zann 1994, Johnsen et al. 1997 & 2000). It appears that tag colors between 600-700 nm, in the warm range of the spectrum (e.g., yellow, orange and red), influence behavioral contexts associated with conspecific preferences, reproductive investment (Zann 1994, Gil et al. 1999), offspring sex ratio (Burley 1986), dominance behavior (Cuthill et al. 1997), mate-guarding (Johnsen et al. 1997), and levels of cuckoldry (Johnsen et al. 2000). Despite the negative effects of tags on animal behavior and survival (Calvo & Furness 1992), and the fact that 40% of research projects use color rings to identify birds, 98% of publications do not mention the possibility of potential injury and reduced survival due to tags (Alisauskas & Lindberg 2002). The potential for injury differs between taxonomic groups (Sedgwick & Klus 1997, Pierce et al. 2007, Nietmann & Ha 2018) and animal body sizes (Griesser et al. 2012). Notwithstanding these issues, colored tags are still very popular, mainly due to their lower cost when compared with other identification methodologies such as radio trackers or PIT-tags (Schlicht & Kempenaers 2018). On the other hand, there is no evidence of changes in predation rate due to tags (Cresswell et al. 2007), and even leg flags do not substantially increase predation (Weiser et al. 2018).
Given the above overview, we emphasize that the methodology used to choose color tagging combinations should be carefully evaluated during project development and subsequently reported in the methods section of publications. However, this information is rarely reported. When developing their tagging methodology, investigators may unconsciously select more conspicuous colors to tag animals as these are more likely to result in fast identification. To avoid biases in their choices, field ecologists should necessarily adopt a randomized strategy to determine tag colors and their combinations. We suggest that the best way to deal with the possibility of tag color influence is to generate a list of color tag combinations before tag application, and to follow the list regardless of individual characteristics of the animal (e.g., body size, percentage of feather coverage, color, etc.). PACKAGE DESCRIPTION The genseq function in the GenTag package is the main function to create a list of color tag sequences (functions are summarized in Table 1). First, a single sequence is generated by an algorithm, followed by confirmation of its uniqueness. Previously used sequences can also be used for the uniqueness test. Users can request sequences with specific tags, such as metal or flag bands for numbered tagging. This function was designed to sort out sequences using equal numbers of tags. If the user wants to create sequences with different numbers of tags, it is necessary to use "EMPTY" as a proxy for a special color for the nontag, and then change the parameter emptyused to TRUE (see genseq help for more information of application). In this scenario, genseq will take into account synonyms of combinations with the "EMPTY" code, for example: "EMPTY"-"Red"-"Blue" is synonymous for: "Red"-"EMPTY"-"Blue" and "Red"-"Blue"-"EMPTY". Although there is no evidence of conspecific preferences based on number of tags (Jennions 1998), we recommend avoiding applying different numbers of tags to individuals in the same population, as this may generate confusion in identification. Some animals may actively remove tags, and some colors appear to have a higher rejection rate than others (Kosinski 2004), leading researchers to misidentify individuals in the field. Sequences are created by a replaceable algorithm that selects among tag colors. Here we provide three algorithms: "All equal", creates combinations of tags in which all colors have the same probability of being sampled; "Variable frequency", creates combinations of tags using different probabilities for each color, where the probabilities are defined by the user; and the "Life expectancy" algorithm creates a restriction based upon color combinations, so that all colors will be represented in similar frequencies in the natural population under study. The latter algorithm requires information of all previously used combinations and dates of applied tags. Additionally, this algorithm can be improved by providing an estimation of survival probability and lifespan. The routine first estimates the quantity of remaining color tags in the natural population. The estimates of survival probability and lifespan provided to the algorithm removes the number of tags that are lost through individual mortality. The sample ratio for each color is then determined by Equation 1. The speed parameter can range from 0 to 1, and can be used to relax the restriction in the sampling procedure. 
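Because the algebraic form of Equation 1 is not reproduced in this text, the following minimal R sketch should be read as an assumption only: the expression r <- 1 - s * (c / m) is chosen so that it reproduces the boundary behaviour described in the next paragraph (with speed set to 1 the most used color is never sampled, and with speed set to 0 no adjustment is made). The tag counts below are made-up numbers for illustration.
# Hypothetical sketch of a "Life expectancy"-style ratio adjustment (form of Equation 1 assumed)
remaining <- c(Red = 40, Blue = 25, Green = 10)   # estimated tags still present in the population (made up)
s <- 1                                            # speed parameter, between 0 and 1
m <- max(remaining)                               # estimated remaining tags of the most used color
r <- 1 - s * (remaining / m)                      # assumed sampling ratio per color
r / sum(r)                                        # normalised sampling probabilities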
When speed is set at 1 (default), the color that was previously used most extensively will not be sampled in any combination, and other colors occurring in a large number of combinations will only rarely be sampled. Intermediate values allow the occurrence of combinations with commonly used colors, but with a degree of restriction. Alternatively, when speed is set at 0, no adjustments will be made. The user can also select colors that will be ignored in the sample adjustment (see lifexp help for more details). In Equation 1, r is the ratio for sampling a given color, c is the estimated number of remaining tags of the given color, m is the estimated number of remaining tags of the most used color, and s is the speed. Our algorithms were developed for three different situations, where researchers: i) require a non-biased color tag generator, where all colors will be equally represented in the produced combinations; ii) have a restriction in the proportion of each color's availability, a common occurrence, for example, when researchers receive donations of color tags; and iii) have ongoing studies, realize a possible bias in color tag effect, and need to implement a quick adjustment so that all colors are equally represented in the natural study population. RECOMMENDATIONS For new studies we recommend using the "All equal" algorithm, because it will ensure that all colors are equally represented in the study population. For ongoing studies, we recommend both the "All equal" and "Life expectancy" methods. Across a long time period, "All equal" will adjust tag color representation as animals die. For a fast adjustment, "Life expectancy" is more appropriate since it changes the sample probabilities based upon differences in previously used color frequencies. Neither method assumes a limitation in color tag availability. For any situation with limited color tag availability, we recommend the "Variable frequency" algorithm, to take advantage of the maximum number of combinations using current tag availability. The use of different numbers of tags is an option to save tags, since it increases the number of possible combinations while using fewer tags. We recommend avoiding this procedure, because it may result in misidentifications if an individual loses or removes a tag (Kosinski 2004). Furthermore, by using the same number of tags for all individuals, tag weight will be equivalent for all animals, despite a possible effect of color. In Appendix I we provide a tutorial on how to generate the list of color tag combinations for both new and ongoing studies. We exemplify how to apply the three methods to generate color tag sequences. The GenTag package provides accessible and simple methods for ecologists and field researchers to generate color tag sequences. The use of a random process to define the color tags to be applied to each animal is the best way to deal with the influence of tag color upon behavior and life history parameters in general. We highlight that the method used to choose color tagging combinations should be carefully evaluated and reported in the methodology section of publications. The GenTag package provides a straightforward and flexible way to deal with tagging effects on natural populations under study. GenTag is written in the R programming language (version 3.5.0) and can be run on Windows, Mac OS X, and Linux systems. There are no package dependencies in the current stable version (version 1.0).
It can be installed from CRAN (https://cran.r-project.org/web/packages/GenTag/), and a development version can be found on GitHub (https://github.com/biagolini/GenTag). INTRODUCTION This tutorial illustrates how to use the GenTag package to improve bird color tagging protocols. We provide examples and advice based on our experience with bird field surveys. The theoretical background of the available methods presented in the main paper must be consulted before following this tutorial. This tutorial was written for R beginners; however, it demands a minimum knowledge of how R works (the user must know what an object and a working directory are, and how to apply functions). Choose parameters to generate sequences The first step is to determine three fundamental parameters: i) the number of tags that each bird will receive; ii) the colors to be used; and iii) which algorithm will be used. The first two parameters determine the number of possible combinations that can be created for color tagging. The maximum number of unique color tag combinations (Mcomb) is a function of the number of available colors (Ncolors) and the number of tags used for each animal (Tag). Thus, it is clear that each new possible color tag has a significant impact on the number of possible combinations. Therefore, to achieve a large number of possible combinations, the researcher should use as many colors as possible. The choice of which colors will be used depends on several factors. First, similar colors, such as white and light blue, should be avoided because natural conditions (i.e. sunlight, dust) can result in tags with similar colors becoming impossible to tell apart during focal observations, even with binoculars. Conspicuous bands make visual identification easy; however, they can affect social behavior and the probability of the bird being detected by a predator. The use of band colors similar to bird plumage or leg tissue reduces this impact. The number of tags used on each animal also has a large impact on the number of possible combinations. However, it should be kept to a minimum, because color tags generally have a negative effect on birds. There is no rule of thumb concerning the number of tags to be used on each animal. Decisions are based upon the number of tags needed to cover the expected sample size, but should take into account effects on bird behavior and survival. For instance, too many tags can be detrimental for flight, color rings can catch on vegetation leading to the bird's death, and tag colors may disrupt social behavior. We suggest that a good starting point is to use four tags per animal, two on each leg. For instance, using four tags with seven available colors to produce different combinations will yield over 16,000 unique combinations. In terms of the best algorithm for color sampling, we recommend use of the "All equal" method. This is a method designed to produce non-biased color sequences, where all colors will be equally represented in the combinations generated. It is recommended for both new and ongoing studies, because over the long term, the method ensures that all colors are equally represented in the study population. However, if new color tags are introduced in the system, researchers must consider the use of the "Life expectancy" method. Finally, to take advantage of the maximum number of combinations with a restricted number of available tags, we recommend the use of the "Variable frequency" method.
This latter method is useful in situations where color tags were donated, a common occurrence in laboratories with several ongoing field surveys. We describe each sample routine method in the main paper. Generate color tag sequences In this section, we show how to apply the main functions of and provide all sequences of codes necessary to use GenTag. For a minimum and necessary acquaintance with R, we recommend Crawley (2012). Make sure commands are typed exactly as illustrated, as they are case sensitive. The first step is to install and load GenTag. You can follow this tutorial by typing the following commands at the R prompt. install.packages("GenTag") library("GenTag") You must create an object to hold the name/code of available colors. Make sure that all color names are types exactly the same in your database. For instance, if you typed "Green", "green", and "GREEN", R will recognize these as different colors/codes. tcol<-c("Black","Blue","Brown","Gray","Green","Pink","Purple","Red","White","Yellow") At this point you can create your first color tag sequences list. For this first example, we will use the "All equal" algorithm. Use the function genseq to create combinations, the argument ncombinations will determine the number of combinations to be produced, ntag is the argument for the number of tags used in each animal, and the colorsname argument is to determine the available colors to be sampled (i.e., the object created in the last step). genseq (ncombinations=30, ntag=4,colorsname=tcol) If you have any difficulties in applying a function, access the help documentation by using the help command. help(genseq) # or just type ?genseq Note that in our example, we do not inform the algorithm used to generate color sequences. In this situation, Genseq will automatically use the default "All equal" algorithm. If a different algorithm is desired, it must be informed in the argument gen_method, as will be shown below. Another important point is to notice that in this example, previous used combinations were not taken into account in the uniqueness test. Thus, using the above example, previously used combinations can be generated again, leading to duplicates in your database. There are several ways to import data into R, as shown in Fig. 1. In this tutorial we use simulated data of previously used combinations provided within the GenTag package. data(pre_used) # Load data example In the example, data are stored in an object named pre_used, a type of data frame. Information in a data frame can be accessed in various ways. To see what is contained in the pre_used object, type the following code to check the first elements of your data frame: head(pre_used) You can see that this data frame contains 5 columns, the first 4 are colors used in sequence (the order is: upper left, bottom left, upper right, bottom right), the last column is the year when each combination was used. You can use this to assess previously used sequences. Set the argument usedcombinations to the object (data frame or matrix) that contains color tag records (columns 1 through 4 in the example). genseq (ncombinations=30, ntag=4, colorsname= tcol, usedcombinations=pre_used[,1:4]) To create sequences that contain special codes, such as metal for numbered tagging: set the argument nspecial to the number of special codes, and the argument name1 and location1 to inform the tag codes and where each special tag can be placed. 
In the following example, one metal tag will be used for all birds, in positions 2 or 4 (left or right bottom). genseq(ncombinations=30, ntag=4, colorsname= tcol, nspecial=1, name1="Metal", location1=c(2,4)) Special codes can also be used to create combinations with different numbers of tags. In this situation, a special "color" named as "EMPTY" can be a proxy for non-tags. Two problems arise with using different numbers of tags: i) misidentification of individuals in the field, since some animals can actively remove tags; ii) several synonyms of combinations, for example, by using 2 tags in each leg "EMPTY"-"Green"-"Red"-"Blue" is synonymous with: "Green"-"EMPTY"-"Red"-"Blue". To adjust the test of uniqueness for codes with "EMPTY" data, set the argument emptyused to TRUE, inform which code is the proxy of non-tag at argument emptyname, and define which tags are in the same group (e.g., applied on the same leg) by arguments g1,g2,…g6 (in the example g1 represents left leg and g2 represents right leg). genseq (ncombinations=30, ntag=4, colorsname= tcol, usedcombinations=pre_used[,1:4], emptyused = TRUE, emptyname = "EMPTY", g1 = c(1,2), g2 = c(3,4)) Until now, the combinations were just displayed on R console. To export combinations, you can address combinations to an object, and then export this object as a .txt or .csv file. setwd(choose.dir())# Choose a working directory to save your data combinations <-genseq(100, 4, tcol) # Save a set of sequences in an object # Export the object to csv file write.csv(combinations, file="Color_sequences.csv", row.names=F) # Export the object to txt file write.table(combinations, file = "Color_sequences.txt", sep = "\t", row.names = F) The tools presented above provide the versatility to adjust combinations to fit any particular study. All specifications are equally used for all sample algorithms. As mentioned before, to change the sample method you must use the argument gen_method. The "Variable frequency" creates combinations of tags using different probabilities to sample each color. Thus, to apply this method it is also necessary to inform a proportion of each available color. You can set the sample ratio by an object with ratios present in the same sequence as the color name, tcol object in our example. # Create an object to hold the ratio for sampling p<-c(1,2,5,1,2,2,4,5,8,5) # Generate sequences by Variable frequency algorithm genseq(ncombinations=30, ntag=4, colorsname=tcol, gen_method="vfrequency", colorsf=p) A good practice for those that decide to use this method is to create a spreadsheet with two columns, where the first column contains the name of the colors and the second contains the number of available tags. Next, import the table (as shown in Fig. 1), and use the first column to address color name (in colorsname argument), and the second column as a reference for the sampling ratio (in usedcombinations argument). For a quick adjustment in color representation, we recommend the use of the "Life expectancy" method. This algorithm creates a restriction based upon color combinations. The sample ratio for each color is adjusted based upon an estimate of how many color tags still exist in nature. This method allows a proportional adjustment of colors in the population faster than the "All equal" method. To apply this method it is necessary to inform when each combination was used (yearusedcombinations argument). 
To improve accuracy, you can provide an estimation of yearly survival rate (yearsurvival argument), and lifespan (lifespan argument), which will provide an estimate on the remaining color tags present in nature based on ringing date. If yearsurvival and lifespan are undefined, it will be assumed that animals never die, and that the proportion that occurs in the natural population equals the total number of tags used. In a long-term survey, it is reasonable to not take into account old tag records. # Generate sequences by Life expectancy algorithm genseq(ncombinations=30, ntag=4, gen_method="lifexp", colorsname= tcol, usedcombinations=pre_used[,1:4], yearusedcombinations=pre_used[,5], yearsurvival= 0.8, lifespan=5, currentyear=2019) Figure 1. General overview of how to import pre-used sequences into R. There are several ways to import data into R, this is just one approach. A) Use a spreadsheet software (e.g. Microsoft Excel, LibreOffice Calc, Apple Numbers) to type your pre-used combinations. In the example, the first row is the header, and five columns are used to present information of color tags. Columns 1, 2, 3 and 4 denotes positions upper left, bottom left, upper right, and bottom right, respectively; the last column denotes the year/breeding season when the bird was color tagged. B) Export your spreadsheet as a ".txt" file. C) Import your pre-used records and store in an object, by typing the following command at the R prompt: pre_used<-read.table(choose.files(), header = TRUE)
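Building on the "good practice" described above for the Variable frequency method, a minimal sketch of that spreadsheet-based workflow could look as follows. The file name and column layout (color name in the first column, number of available tags in the second) are assumptions for illustration, and the availability column is passed to colorsf here, matching the earlier vfrequency example rather than the usedcombinations argument.
# Hypothetical example: feed a two-column availability table to genseq (Variable frequency)
library("GenTag")
stock <- read.table("tag_stock.txt", header = TRUE)   # assumed columns: Color, Available
genseq(ncombinations = 30, ntag = 4,
       colorsname = stock[, 1],                       # first column: color names
       gen_method = "vfrequency",
       colorsf = stock[, 2])                          # second column: availability used as sampling ratio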
v3-fos-license
2017-09-06T08:33:57.045Z
2011-11-01T00:00:00.000
24878644
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://academic.oup.com/rheumatology/article-pdf/50/11/2051/5051849/ker256.pdf", "pdf_hash": "1b786a3cd12a0a0468f993ccb26aa6114178043e", "pdf_src": "ScienceParsePlus", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41780", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "sha1": "1f5089f6964fd6a101c201d675f255f3672604a7", "year": 2011 }
pes2o/s2orc
Sleep and fatigue and the relationship to pain, disease activity and quality of life in juvenile idiopathic arthritis and juvenile dermatomyositis Objectives. To determine and compare the prevalence of disturbed sleep in JIA and JDM and the relationship of sleep disturbance to pain, function, disease activity and medications. Methods. One hundred fifty-five patients (115 JIA, 40 JDM) were randomly sampled and were mailed questionnaires. Sleep disturbance was assessed by the sleep self-report (SSR) and the children's sleep habits questionnaire (CSHQ). Fatigue, pain and function were assessed by the paediatric quality of life inventory (PedsQL) and disease activity by visual analogue scales (VASs). Joint counts were self-reported. Results. Eighty-one per cent responded, of whom 44% reported disturbed sleep (CSHQ > 41); there were no differences between disease groups. Poor reported sleep (SSR) was highly correlated with PedsQL fatigue (r = 0.56, P < 0.0001). Fatigue was highly negatively correlated with quality of life (r = -0.77, P < 0.0001). The worst pain intensity in the last week was correlated with sleep disturbance (r = 0.32, P = 0.0005). Fatigue was associated with prednisone and DMARD use. Conclusions. Sleep disturbance and fatigue are prevalent among children with different rheumatic diseases. Sleep disturbance and fatigue are strongly associated with increased pain and decreased quality of life. Strategies aimed at improving sleep and reducing fatigue should be studied as possible ways of improving quality of life for children with rheumatic illness. Introduction JIA is one of the most common rheumatic diseases in childhood, affecting at least 1 in 1000 children [1]. Findings suggest that fatigue is common [2] and that sleep is disrupted in children with JIA [3,4]. Sleep in adequate amount and quality is essential for normal child development. Sleep disturbances collectively refer to impairments in the ability to initiate or maintain sleep, and can be measured by parent or child self-report and by objective measures such as actigraphy and polysomnography [5]. Sleep disorders may affect a child's daytime function, resulting in behavioural problems such as attention deficit, aggressiveness, hyperactivity, chronic fatigue, decrements in daytime alertness and performance, and an increase in school absenteeism [6,7]. Sleep disturbances have also been associated with children's quality of life, negatively impacting children's physical and emotional well-being [8,9]. Sleep disorders are common in adult patients with musculoskeletal diseases including RA, FM and OA [10-12]. In patients with RA there is evidence of severe sleep fragmentation with frequent awakenings and arousals. The behavioural manifestations of such sleep disturbances in adults include excessive daytime sleepiness and fatigue, coupled with decreases in mood and performance [13]. Although it is generally assumed that joint pain may induce sleep abnormalities, the cause of frequent awakenings and arousals in patients with RA is controversial. Little is known about the quality of sleep in children with rheumatic diseases. Studies suggest that sleep is disrupted in children with JIA [3,4]. Children with JIA and their parents report significantly more instances of night waking, parasomnia, sleep anxiety, sleep-disordered breathing, early morning waking and day-time sleepiness than do healthy children [14].
The cause of sleep disturbance in patients with JIA has yet to be elucidated; Bloom [14] and Lewin and Dahl [15] hypothesize a bi-directional interplay between pain and sleep disturbance. Currently there are no data about sleep disturbance in children with JDM. The aims of this study were to describe and compare sleep disturbance in the most common subtypes of JIA and in JDM, to explore possible associations of sleep disturbance with fatigue, disease activity, pain and health-related quality of life (HRQL), and to measure the influence of different medications on sleep. Patients and methods We used a cross-sectional, mailed survey design. The research was approved by The Hospital for Sick Children Research Ethics Board (approval number 1000010691). Study population A random, representative sample, balanced for disease subtype, was drawn from the population of all patients diagnosed with JIA (by the ILAR criteria) [16] currently followed at The Hospital for Sick Children. For feasibility reasons, we limited the onset subtypes of JIA that we sampled to oligoarticular, polyarticular (RF negative) and systemic. We used a computer-generated list of random numbers in order to draw the sample. Eligible children were between the ages of 8 and 16 years so that they were able to answer the study questionnaires. One hundred and fifteen patients with JIA (oligoarticular, n = 40; polyarticular, n = 40; and systemic, n = 35) were randomly selected. Forty patients with probable or definite JDM [17,18] between the ages of 8 and 16 years were randomly selected using the same strategy. Patients with concomitant chronic inflammatory diseases (e.g. IBD, active atopic dermatitis, etc.) were excluded. Procedure In order to achieve a high response rate we used components of the Tailored Design Method [19] that comprises several contacts with the families. A pre-notice letter was sent a week before the questionnaire mail out. One week later a questionnaire with a cover letter was sent; 2 weeks after the questionnaire a thank you/reminder postcard was sent. For those who did not respond after 2 weeks, a fourth reminder letter was sent with a replacement questionnaire; finally, 2 weeks later telephone contact was made with those who had not yet answered the survey. Questionnaires We developed a questionnaire that captured demographic and disease-related information and comorbidities, e.g. age, disease type, medication (including dose and frequency) as well as questions regarding family history of sleep disturbance and other conditions affecting the patient associated with sleep disturbance (i.e. attention deficit hyperactivity disorder, FM and psychiatric illness). Evaluation of sleep and fatigue Sleep was assessed by two questionnaires, the parent-reported children's sleep habits questionnaire (CSHQ) [20] and the child-reported sleep self-report (SSR) [21]. The CSHQ is a retrospective, 45-item parent questionnaire that has been used in a number of studies to examine sleep behaviour in children and has established validity and reliability. Thirty-five items on the CSHQ are grouped into eight subscales related to a number of key sleep domains: . bedtime resistance (six items); . sleep onset delay (one item); . sleep duration (three items); . sleep anxiety (four items); . night awakening (three items); . parasomnia (seven items); . sleep-disordered breathing (three items); and . day-time sleepiness (eight items). 
The total score consists of 33 items, rather than 35, because two of the items on the bedtime resistance and sleep anxiety subscales are the same. Parents are asked to recall sleep behaviours occurring over a typical recent week. Items are rated on a 3-point scale for frequency of the sleep behaviour: usually = 5-7 times/week; sometimes = 2-4 times/week; and rarely = 0-1 times/week. A total CSHQ score of 41, as has been previously suggested, was chosen as the cutoff to define a patient as having sleep disturbance. The SSR is a 26-item, 1-week retrospective survey designed to be administered to or self-administered by elementary school-aged children (generally ages 7-12 years). The SSR was designed to assess sleep domains like those of the CSHQ, and, in its development, items were selected to be approximately similar to items on the CSHQ. Though designed to parallel the CSHQ, the SSR addresses domains with fewer and less-complex questions in order to be understandable to children. Items are rated on the same 3-point scale as the CSHQ, with higher scores indicating more disturbed sleep. Some items are reverse scored so that a higher score in any given item consistently indicates more disturbed sleep or more problematic sleep behaviour. The SSR yields a total score only. Fatigue was assessed by the PedsQL multi-dimensional fatigue scale-parent and patient form. This is an 18-item questionnaire that was designed to measure child and parent perception of fatigue in paediatric patients and comprises the general fatigue scale (six items), sleep/rest fatigue scale (six items) and cognitive fatigue scale (six items). This questionnaire was previously validated in paediatric patients, including those with rheumatologic disease [22,23]. Evaluation of disease activity and HRQL/functional status Disease activity was assessed with the following variables: (i) Parents' global assessment of overall disease activity on a 10-cm visual analogue scale (VAS). (ii) Number of swollen and painful joints by parents' and patients' self-report joint count, using a pictorial (mannequin) format. This method was previously validated in adult patients with RA and found to have a high correlation with physician assessment [24]. We modified the original mannequin format for use by paediatric patients; to the best of our knowledge, the mannequin has never been tested in this population before. Although a majority of JDM patients have arthritis during the course of their illness [25], the self-assessed joint count is a more important indicator of disease activity for the JIA subjects. (iii) PedsQL (pediatric quality of life inventory) core modules-parent and patient form. The PedsQL is a modular instrument designed to measure HRQL in children and adolescents aged 2-18 years; the generic core scales are multi-dimensional child self-report and parent proxy-report scales developed as generic measures to be integrated with the PedsQL disease-specific modules [22]. Lower scores denote a poorer quality of life. (iv) PedsQL rheumatology module-parent and patient form. This is a 22-item questionnaire that was designed to measure paediatric rheumatology-specific HRQL. Both the PedsQL core and rheumatology modules have been tested in this population and have well-established validity and reliability [22,23,26-35]. Evaluation of disease-related pain To assess pain, parents and patients completed the PedsQL paediatric pain questionnaire [34]. Present pain and worst pain intensity were assessed by a 10-cm VAS.
In addition, four developmentally appropriate categories of pain descriptors were provided along with a body outline. The child was instructed to colour the four boxes underneath each descriptive category representing pain intensity and then to colour the body outline with the selected colour/intensity match. On the parents' form, pain was rated using numbers from 1 to 10 according to pain intensity, and parents were requested to place the numbers in the body outline. For simplicity, the body was divided into 34 areas (17 in the front and 17 in the back of the body) and only the number of painful areas was evaluated for our statistical calculations. Statistical analyses Continuous scores were described as means and medians as appropriate; categorical scores were described as frequencies. To compare groups, analysis of variance (ANOVA) was used with subsequent pairwise comparisons corrected for multiple comparisons using the Tukey honestly significant difference (HSD) test. Frequencies were compared by chi-square analysis. Correlations were performed using Pearson's product-moment correlation. General linear modelling using standard regression diagnostics was used to look at predictive relationships. Due to possible confounding between medication use and disease activity, when medications were investigated as independent variables, models were constructed in which diagnosis, present pain, worst pain, number of painful areas, tender joint count, swollen joint count and global assessment were included as co-variates; final models were chosen using a backwards selection process. All analyses were performed using the R statistical language [version 2.7.2, Copyright (C) 2008, The R Foundation for Statistical Computing, ISBN 3-900051-07-0], and DataDesk 6.2.1 (Data Description, Inc., Ithaca, NY, USA). Results We had a high response rate; of 155 questionnaires that were mailed, we received 125 (80.6%). The demographic data are summarized in Table 1. The ages and sexes of the respondents did not differ between the groups. Sleep and fatigue All groups suffered from moderately severe fatigue, with no real differences between them; 44% reported sleep disturbance (CSHQ score ≥ 41; Table 2). There was no difference in sleep disturbance between the groups; sleep disturbance was as marked in the JDM group as it was in the different JIA subtypes. Disease activity Disease activity was mostly low to moderate and the number of active (swollen or painful) joints was low (Table 3). As expected, the polyarticular-onset JIA group had a higher parent-reported painful joint count than the other groups. Pain and HRQL Pain was low to moderate in the studied subjects; 33% of the children reported no pain at the time of assessment (Table 4). Self-reported pain was somewhat higher in the polyarticular-onset JIA group. HRQL scores demonstrated moderate impairments in all groups; the polyarticular-onset JIA group appeared to be more affected than the others. Comorbidity and sleep Very few subjects had comorbid illnesses or a family history of sleep disorder (Table 1). Sleep scores as measured by the CSHQ and SSR did not differ significantly between those who had any of the measured comorbid illnesses or a family history of sleep disorder and those who did not.
Relationship between disease activity and sleep and fatigue For the group as a whole, there was no correlation between disease activity (as measured by the VAS) and the severity of the sleep disturbance (as measured by the CSHQ) (r = -0.03, P = 0.8). The number of tender joints as reported by parents correlated modestly with the CSHQ (r = 0.27, P = 0.002); the relationship was similar when reported by children themselves. Parent-reported swollen joint count correlated less well with the CSHQ (r = 0.15, P = 0.09). Fatigue was worse (as measured by the parent-reported PedsQL fatigue scale) as disease activity increased (r = -0.21, P = 0.03) and as tender joint count (r = -0.35, P < 0.0001) and swollen joint count (r = -0.23, P = 0.01) increased. Similar results were seen when patients reported for themselves. Relationship between pain and sleep and fatigue Pain and fatigue were correlated. For example, parent rating of worst pain was moderately correlated with fatigue as reported by the PedsQL (r = -0.51, P < 0.0001). Worst pain was modestly correlated with sleep disturbance as measured by the CSHQ (r = 0.23, P = 0.01). Likewise, patient-reported sleep disturbance (SSR) correlated moderately with pain and fatigue (number of painful areas r = 0.11, P = 0.22; worst pain r = 0.32, P = 0.0003; present pain r = 0.32, P = 0.0003; and PedsQL fatigue r = -0.45, P < 0.0001). The effect of sleep disturbance and fatigue on quality of life Disturbed sleep (parent and self-report) and increased fatigue were both strongly related to HRQL-both generic, as measured by the PedsQL core module, and specific, as measured by the PedsQL rheumatology module. The parent's PedsQL core module was negatively correlated with the CSHQ (r = -0.56, P < 0.0001), and the child's PedsQL core module was negatively correlated with the SSR (r = -0.42, P < 0.0001). The parent's PedsQL rheumatology module was negatively correlated with the CSHQ (r = -0.49, P < 0.0001), and the child's PedsQL rheumatology module was negatively correlated with the SSR (r = -0.36, P < 0.0001). Greater fatigue, as measured by the PedsQL fatigue scale, was highly correlated with worsened HRQL as measured by the PedsQL core module-both parent and child reported (r > 0.70, P < 0.0001). The influence of medications on sleep and fatigue A number of subjects, in each of the diagnostic categories, were being treated with anti-rheumatic medications at the time of the study (Table 5). Additionally, two subjects were treated concomitantly with fluoxetine and one with risperidone; additional analyses with these medications were not done due to the small numbers. Subjects treated with NSAIDs had a lower CSHQ score than those not treated (mean 37.3 vs 41.5) when adjusted for present pain (F(1,113) = 6.9, P = 0.01). The CSHQ did not differ between those subjects taking DMARDs, biologics or prednisone (even when prednisone dose was considered) and those not taking these medications. The SSR was not different among those taking or not taking NSAIDs, DMARDs, biologics or prednisone when adjusted for diagnosis. Fatigue (as scored by the parents on the PedsQL fatigue scale) was worse in those subjects taking DMARDs-no matter what the diagnosis-when compared with those subjects not taking DMARDs (mean 70.5 vs 79.4, F(1,112) = 6.8, P = 0.01); however, this relationship was no longer statistically significant in models that included parent ratings of worst pain. The same findings were seen when child-rated fatigue was examined.
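As a concrete illustration of the analysis protocol described under Statistical analyses, the correlations and covariate-adjusted models reported above could be run along the following lines in R; the data frame and variable names are hypothetical and are not taken from the study's actual dataset.
# Hypothetical sketch of the reported analysis steps; 'dat' and its columns are made-up names
cor.test(dat$cshq_total, dat$worst_pain, method = "pearson")            # Pearson correlation
full <- lm(pedsql_fatigue ~ dmard + diagnosis + worst_pain + present_pain +
             tender_joints + swollen_joints + global_vas, data = dat)   # medication effect with co-variates
reduced <- step(full, direction = "backward", trace = 0)                # backwards selection of the final model
summary(reduced)
anova(lm(cshq_total ~ diagnosis, data = dat))                           # group comparison (ANOVA)
TukeyHSD(aov(cshq_total ~ diagnosis, data = dat))                       # pairwise comparisons, Tukey HSD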
Discussion We found that sleep is disturbed in almost half of our patients with both JIA and JDM, and that there are important relationships between disturbed sleep, fatigue, pain, disease activity and HRQL. From these cross-sectional data, we cannot determine whether disturbed sleep causes higher pain and poorer quality of life, or whether pain and disease activity lead to poor sleep; however, we believe that it is likely a vicious cycle [15], and that attention to improving sleep may lead to reduced pain and is worthy of study. Healthy children also frequently suffer from sleep disturbance; our findings may not be specific for rheumatic disease. For example, 23% of healthy American elementary school children are expected to score ≥ 41 on the CSHQ [20], whereas about 10% are considered to have sleep disturbance when considering together the scores of the CSHQ, SSR and teacher reports of daytime sleepiness [36]. Our subjects had a somewhat higher frequency of disordered sleep and higher average CSHQ scores than community school children studied in North America [39,40]. Nevertheless, the relationships between poor sleep, pain and quality of life that we saw may not be specific to children with rheumatic diseases, and may be a general phenomenon in children [41-43]. Likely because of the small numbers, we did not find the expected relationships between comorbid personal illnesses, or family illnesses, thought to adversely affect sleep. FM [44-46], attention-deficit hyperactivity disorder [39,47], anxiety or other psychiatric disorder [48,49] and a family history of sleep disorder [50] have all been associated with poor sleep in previous studies; there were so few subjects with each of these problems in our sample that we were likely underpowered to detect potentially important relationships. Moreover, we did not ask about other potential sleep predictors such as disease duration, functional ability (outside that measured by the PedsQL), socioeconomic status, housing status and education. Given the anonymous nature of our data collection, we were unable to get this information from clinic charts. Our cohort appears to be generalizable to other JIA and JDM patients. This is a prevalence sample of patients, and our subjects should therefore be representative of patients seen at any one point in clinical practice rather than seen at diagnosis or at any other extreme point in the disease. Our equal sex ratio among the patients with systemic JIA and the predominance of female subjects in the other groups are similar to previous series [51,52]. Our cohort consists only of patients between the ages of 8 and 16 years, which must be considered when assessing our findings. Our average parents' global assessment of overall disease activity was higher among patients with systemic JIA and polyarticular JIA when compared with patients with oligoarticular JIA, and this is similar to previous reports [22,51,53,54]; our overall disease activity was low, again similar to other series [2,55]. Our average VAS pain was similar to previous reports assessing adolescents with JIA [2,56]. Whereas we collected little self-report data regarding the severity of our JDM subjects, we feel the random sampling process, high response rate and relatively large number of JDM respondents ensured a high likelihood of representativeness. Few studies have examined sleep in children with JIA.
Our finding of a high prevalence of sleep disturbance is similar to a previous report of 74 subjects with limb pain (25 with JIA); in that study 40 (54%) patients had insomnia [8]. Similarly, a study of 21 children with active polyarticular arthritis demonstrated increased sleep fragmentation compared with controls and a strong correlation between alpha activity and pain [57]. Previously, a few small studies have demonstrated that sleep is interrupted among patients with JIA; this included poor sleep quality, parasomnia, daytime sleepiness, sleep fragmentation, increased cyclic alternating patterns and sleep disordered breathing [4,14,58]. Similar to our finding, Zamir et al. [4] demonstrated that the sleep abnormality in JIA patients was associated with pain. Other studies have failed to show an association between sleep disturbance and disease activity; however, in one study there was a high correlation between SSR and average pain [14], and in another, total sleep time and arousals were associated with symptoms of fatigue [59]. Laplant et al. [8] have similarly shown that an impaired PedsQL score is related to insomnia. Since our research was directed towards comparison of sleep and fatigue within the major subtypes of JIA, and the comparison between JIA and JDM, we did not include healthy control subjects. We are satisfied that it has been adequately proven that sleep is poorer in children with JIA than in the general population of children, as demonstrated in the discussion above. To our knowledge, this is the first study to systematically address the problem of poor sleep in JDM. The fact that sleep abnormalities were equivalent to those seen in our JIA subjects suggests that, in general, chronic inflammatory diseases-and perhaps their associated treatments-may have similar effects on sleep. Much more is known about sleep in adults with RA. One of the largest studies evaluated 8676 patients with RA and a comparison group of 1364 subjects without FM and without inflammatory disorders. In that study the investigators found that sleep disturbance is increased in RA, and 25-42% of the variability in sleep disturbance can be attributed to RA. There was a significant positive correlation between pain, mood, disease activity and sleep disturbance [60]. Several studies have shown that pain is more prevalent in JIA than had been previously recognized. Sherry et al. [61] found that 86% of 293 children with arthritis reported pain during a routine clinic visit. Schanberg et al. [62] demonstrated, using a daily paper diary, that school-aged children with chronic arthritis report pain on an average of 73% of days. More recently, Stinson et al. developed and validated an electronic multi-dimensional pain diary for youth with JIA. On average, participants reported mild pain intensity, pain unpleasantness and pain interference over the course of the 2-week study period (they recorded pain three times per day for 14 days). During this 2-week period, 17.1% reported pain on every entry [56]. Surprisingly, pain was as high in our JDM subjects-a finding that has not been widely reported. Measured disease activity seems to predict only a portion of children's pain ratings (8-28%) [63-65]; other factors may influence the pain experience. Our findings raise the possibility that pain may influence the quality of sleep, or that poor quality sleep may influence the perception of pain and increase a child's pain ratings.
Several previous studies-in other conditions-have also demonstrated an association between poor sleep quality and chronic widespread pain [9,45,46,66,67]. As in our study, those investigators were unable to determine whether the sleep disturbance preceded or was a consequence of chronic pain. However, induced periods of night-time mini-arousal have been shown to induce symptoms of chronic widespread pain, while removal of these arousal periods are associated with resolution of pain [68]. Furthermore, sleep deprivation studies have shown in healthy volunteers that poor sleep lowers pressure pain thresholds [69]. Recently Davies et al. [70] demonstrated in 1061 patients with chronic widespread pain that improvement in restorative sleep was associated with the resolution of symptoms of pain. It appears reasonable that poor sleep may have, in part, caused pain in our subjects. We think it most likely that, in fact, the relationship between pain, poor sleep and fatigue may be a vicious cycle-with a significant influence on quality of life [58]. In our cohort, fatigue was related to DMARD therapy (which consisted mostly of MTX). Fatigue is listed as one of the adverse effects of MTX in its product monograph; however, the relationship between fatigue and MTX in rheumatic and other diseases does not appear to have been widely studied. Husted et al. [71] have recently shown, in 499 patients with PsA, that among other variables, MTX therapy was associated with fatigue [71]. It is unclear from our study whether MTX causes fatigue or whether this relationship is a result of confounding-possibly by severity of disease. Our results would suggest that fatigue is strongly associated with poorer quality of life. This relationship has been widely studied in other conditions, e.g. lymphoma [72], vasculitis [73], chronic insomnia [74], multiple sclerosis [75], IBD [76] and a variety of chronic illnesses of childhood [77]. Fatigue may be an appropriate therapeutic target for improving quality of life. When interpreting our results, limitations due to our design must be considered. We used a cross-sectional design; despite correlations between disease, pain, fatigue and poor sleep, we cannot make definitive causal statements. We sampled a relatively small number of patients in each disease subtype; however, our sampling process and high response rate ensured that our subjects were representative, and the strong relationships were highly statistically significant. It is possible that due to the small numbers we missed relationships of smaller magnitude. In addition, we measured disease activity using parent-and patient-reported subjective measures, and the joint mannequin technique that we used has only been validated for adult patients. This may have led to imprecision and weakened the reported relationships; it is possible that the relationships between disease activity and sleep disturbance are stronger than what we report. Given the nature of our data collection, medication use was self-reported. It is possible that subjects underreported their use of medications due to poor understanding, and that we missed other important correlations with sleep. Conversely, our results may more closely represent what patients are actually taking than what is prescribed. Finally, we did not include a healthy control population, as we felt that the differences in sleep between JIA patients and healthy controls had been adequately demonstrated and we were interested in comparisons with a disease control group. 
In summary, sleep disturbance and fatigue are important problems in JIA and JDM patients. Sleep disturbance and fatigue are both correlated with disease activity. Increased pain is associated with more sleep disturbance and more fatigue, and these appear to negatively influence quality of life. From our data we hypothesize that increased disease activity leads to poorer sleep, which then adversely affects pain and quality of life. Further study focusing on mechanisms of impaired sleep is warranted to clarify these relationships. It is likely that a better understanding of the role of disordered sleep in childhood rheumatic disease will lead to therapeutic strategies that will improve pain and quality of life. Rheumatology key messages: Sleep disturbance and fatigue are prevalent among children with different rheumatic diseases. Sleep disturbance and fatigue are associated with increased pain and decreased quality of life. Strategies for improving sleep/fatigue should be studied for children with rheumatic illness.
v3-fos-license
2020-02-12T16:09:04.438Z
2020-02-12T00:00:00.000
211080939
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.nature.com/articles/s41598-020-58966-9.pdf", "pdf_hash": "c21e50d09cd5df5f682eff6503a07d9d6ab783b9", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41781", "s2fieldsofstudy": [ "Biology", "Psychology" ], "sha1": "c21e50d09cd5df5f682eff6503a07d9d6ab783b9", "year": 2020 }
pes2o/s2orc
Single subject and group whole-brain fMRI mapping of male genital sensation at 7 Tesla Processing of genital sensations in the central nervous system of humans is still poorly understood. Current knowledge is mainly based on neuroimaging studies using electroencephalography (EEG), magneto-encephalography (MEG), and 1.5- or 3- Tesla (T) functional magnetic resonance imaging (fMRI), all of which suffer from limited spatial resolution and sensitivity, thereby relying on group analyses to reveal significant data. Here, we studied the impact of passive, yet non-arousing, tactile stimulation of the penile shaft using ultra-high field 7T fMRI. With this approach, penile stimulation evoked significant activations in distinct areas of the primary and secondary somatosensory cortices (S1 & S2), premotor cortex, insula, midcingulate gyrus, prefrontal cortex, thalamus and cerebellum, both at single subject and group level. Passive tactile stimulation of the feet, studied for control, also evoked significant activation in S1, S2, insula, thalamus and cerebellum, but predominantly, yet not exclusively, in areas that could be segregated from those associated with penile stimulation. Evaluation of the whole-brain activation patterns and connectivity analyses indicate that genital sensations following passive stimulation are, unlike those following feet stimulation, processed in both sensorimotor and affective regions. representations as well as the need to treat patients as individuals, it has become eminently important to study human brain function at the level of individuals [13][14][15] . Moreover, signal advantages gained at 7 T can not only be observed at a limited field of view such as in S1, but also in more extended whole-brain acquisitions 16 including the cerebellum 17 . In the present study, we used 7T fMRI to acquire high-resolution neural representations of male genital sensation in the whole-brain at both single subject and group level. In order to do so, subjects underwent passive tactile stimulation of the genitalia in a way that minimized sexual arousal. Passive tactile stimulation of the medial aspect of the feet, performed in an equivalent manner as genital stimulation, served as a control task. The feet were deliberately chosen not only because of their location adjacent to the genitalia in the homunculus, but also as a more emotionally neutral stimulus. In addition, the medial aspect of the feet is involved in successful electrical therapies for an overactive bladder, like posterior tibial nerve stimulation 18 or transcutaneous electrical nerve stimulation 19 . Therefore, current findings regarding supraspinal activation during tactile stimulation of the feet could provide more insight into which regions are targeted and affected by this treatment modality. We hypothesized that the representation of the external genitalia is located in the groin region of S1, lateral to the feet. Furthermore, we hypothesized that passive tactile stimulation of the genitalia would lead to activation of brain regions associated with processing of both sensorimotor discriminatory (S1 & S2) and affective (insula, MCC and mPFC) properties of touch at both single subject and group level. In contrast, passive tactile stimulation of the feet would lead to activation of brain regions predominately associated with processing discriminatory properties of touch. Results Four subjects were excluded from further analysis due to excessive spike head motion. 
The remaining thirteen subjects were included in first and second level of analyses. None of the subjects had an erection whilst undergoing tactile stimulation by the experimenter. Tactile stimulation of the penile shaft evoked significant activation superomedial and inferolateral in S1, S2, ventral premotor cortex (vPMC), posterior and anterior insula, posterior midcingulate gyrus (pMCG), mPFC, thalamus and cerebellum. Responses observed for left and right brushing were similar, and therefore added into a single contrast. Tactile stimulation of the feet evoked significant activation superomedial and inferolateral in S1, S2, vPMC, posterior insula, thalamus and the cerebellum. Representations in S1. Tactile stimulation of the penile shaft and the feet evoked significant activation (p < 0.05 family wise error (FWE)) during whole-brain analysis in 11 out of 13 subjects (Fig. 1). Bilateral activation was observed superomedial and inferolateral in response to stimulation of the penile shaft. Unilateral activation was observed superomedial in the right hemisphere in response to stimulation of the left foot. Bilateral activation was observed superomedial in response to stimulation of the right foot. In addition, we also observed unilateral activation inferolateral in the left hemisphere. In single subjects, feet activation clusters extended anteromedial in S1 along the postcentral gyrus. Summation of these elongated clusters resulted in fractioning of feet activation clusters at group level (Fig. 2). Slight overlap was seen between penile shaft and feet activation clusters. Nevertheless, shaft clusters were consistently found lateral to the feet in the left and right hemispheres, both at single subject and group level. Whole-brain results. To further determine which cortical areas are implicated in processing tactile input from the genitalia and the feet outside of S1, we also examined whole-brain responses (Table 1). Tactile stimulation of the penile shaft elicited bilateral activations in S2, posterior and anterior insula, vPMC and the cerebellum. In single subjects, unilateral and bilateral activations were seen in the pMCG, mPFC and thalamus. At group level, unilateral activation in the pMCG was seen in the left hemisphere and bilateral activation was seen in the mPFC. We also observed subcortical activation in the medial and posterior regions of the thalamus, correlating with reported locations of the medial dorsal (MD) and ventral posterolateral (VPL) nuclei 20,21 . Tactile stimulation of the left foot elicited bilateral activations in S2, unilateral activation in the posterior insula in the right hemisphere, and unilateral activation in the cerebellum in the left hemisphere. Subcortical activation was observed posterior in the thalamus in the right hemisphere, presumably the VPL. Tactile stimulation of the right foot elicited bilateral activations in S2, posterior insula, and unilateral activation in the vPMC in the left hemisphere. Subcortical activation was observed posterior in the thalamus (VPL) in both hemispheres. Representations in the cerebellum. At lower statistical thresholds, we observed significant activation in the anterior (lobules I-IV) and superior posterior lobe (lobule VI) of the cerebellum at both single subject (p < 0.001 uncorrected for multiple comparisons) and group level (p < 0.005 uncorrected for multiple comparisons) (Fig. 3). Shaft and feet representations were found in symmetrical locations in both cerebellar hemispheres across 8 out of 13 single subjects (Fig. 
3). Tactile stimulation of the penile shaft evoked significant bilateral activation in lobule VI in the posterior lobe, part of the cerebrocerebellum. In 2 out of 13 individuals (#11, #13) activation was also seen just posterior to feet clusters in lobule IV. Tactile stimulation of the feet evoked significant unilateral activation in lobule IV in the anterior lobe, part of the spinocerebellum. In 4 out of 13 individuals (#04, 07, 12, 13) activation was also seen more posteriorly, adjacent to the shaft clusters in lobule VI. At group level this was seen for the right foot in the contralateral cerebellar hemisphere (Fig. 3).

Functional connectivity. We also assessed the functional connectivity between timeseries of regions of interest (ROIs) for both functional tasks separately (Figs. 4 and 5). For the penile shaft, timeseries from the superomedial and inferolateral S1, S2, vPMC, posterior insula and the right anterior insula showed moderate correlation (range ρ 0.44-0.60). The left anterior insula showed weak correlation (range ρ 0.36-0.49) with the superomedial and inferolateral S1, S2, vPMC and posterior insula. Timeseries from the pMCG and cerebellum showed weak correlations overall with other ROIs. For the left foot, timeseries from superomedial S1 in the right hemisphere showed a strong correlation with the S2 and posterior insula ROIs on the same side, whereas correlation with S2 in the left hemisphere was weaker (Fig. 5A). Timeseries from the cerebellum showed weak correlations overall with other ROIs. A similar, yet inverted, correlation pattern was observed for the right foot. Timeseries from superomedial S1 in the left hemisphere showed high correlation with the S2 and posterior insula on the same side, whereas correlation with S2 in the right hemisphere was weaker (Fig. 5B). Timeseries from the cerebellum showed weak correlations overall with other ROIs.

Distances. The distances between activation foci of penile shaft and feet representations in S1 and the cerebellum were measured (Table 2). In S1, vertex distances were measured over the cortical surfaces created during the inflation process in Freesurfer. This gives a more faithful representation of distances between cortical representations, given the high degree of folding of the postcentral gyrus. Since the cerebellum is not included in the inflation process, Euclidean distances were measured between cerebellar activation foci of penile shaft and feet representations. The distance between neighbouring body representations reflects the amount of cortical space taken up by those representations 22 . For example, a piano player will have larger digit representations than average, leading to a measurably larger distance between the thumb and the little finger. If, for example through underuse, the penile shaft representation gets smaller, this should have an effect on the distance between penile shaft and foot activation foci.

Discussion
The present study is the first to investigate genital touch with the extensive field of view supported by 7T imaging, and by doing so it provides novel data on the precise representations of the genitalia in the human brain, in particular those of hindbrain areas like the cerebellum that were often omitted with other approaches. By exploiting the increased BOLD sensitivity and specificity available at 7T, we obtained data with high spatial acuity and anatomical specificity.
These data clearly demonstrate that the genitalia are represented in the groin region of S1 and not below the feet. Furthermore, considerable differences were observed in whole-brain activation patterns in response to tactile stimulation of the genitalia as opposed to the feet. Tactile stimulation of the penile shaft evoked significant activations of discriminative (sensorimotor) and affective (emotional) brain regions, whereas tactile stimulation of the feet evoked significant activations of mainly discriminative brain regions. In addition, functional connectivity was assessed between activation clusters for both the genitalia and feet. This is the first study to report on functional connectivity of genital sensation. Some have described the representation of the genitalia to be positioned in the medial wall below the representation of the feet in S1 4-6 , while others describe a more dorsolateral representation between the trunk and leg 7,8 . At both single subject and group level, our data clearly indicate that the genitalia are represented dorsolateral of the feet in S1 (Figs. 1 and 2), similar to what has been reported by previous studies using 3T fMRI 7,8 . Animal studies investigating genital representations have also described this location using extracellular recordings in primates 23 and, more recently, cortical microstimulation in rats 24 .

Table 1. Whole-brain group activation in response to stimulation of the penile shaft and feet versus rest. Brain regions, MNI coordinates and peak t-values are listed. All activation for the penile shaft is reported using a global null conjunction analysis (p < 0.005 uncorrected for multiple comparisons, t-value > 1.52). All activation for the left and right feet is reported using a one-sample t-test (p < 0.005 uncorrected for multiple comparisons, t-value > 3.05). S1: primary somatosensory cortex; S2: secondary somatosensory cortex; vPMC: ventral premotor cortex; pMCG: posterior midcingulate gyrus; mPFC: medial prefrontal cortex.

Despite applying unilateral stimulation to the penile shaft, we observed bilateral activation in S1 irrespective of stimulation side. This corresponds well with findings showing that cutaneous regions situated in the midline of the body are represented bilaterally in S1 25 . Interestingly, studies using electrical stimulation of the DNP to locate the genitalia in S1 repeatedly demonstrated activation deep in the interhemispheric fissure. Cortical evoked responses elicited by electrical stimulation of the DNP were consistently located beneath those elicited by stimulation of the posterior tibial nerve [4][5][6] . It should be noted, however, that earlier techniques (e.g. EEG/MEG) used to measure brain responses offered poor spatial resolution. In addition, it is known that differences in evoked brain potentials can be observed when comparing electrical to tactile stimuli, further questioning this method when investigating the processing of physiological somatosensory stimuli 26 . In the present study, passive tactile stimulation of the medial aspect of the feet served as a control, analogous to electrical stimulation of the posterior tibial nerve. We expected to see activation in S1 lateralize to the contralateral hemisphere, as can be seen during stimulation of the left foot (Table 1).
During stimulation of the right foot, however, significant yet weaker activation was also observed in the ipsilateral hemisphere (Table 1). Absence of lateralization in S1 has also been demonstrated during tactile stimulation of only the right and not the left hand in right-handed subjects 27 . The authors suggested this asymmetry is associated with hand preference and proficiency. Humans not only have a preference for left- or right-handedness, but also left- or right-footedness, which can be seen in, for instance, football players 28 . In the present study we did not assess footedness prior to inclusion; however, we suggest that ipsilateral activation in S1 during stimulation of the right foot may be the result of right-footedness. Activation in S1 evoked during tactile stimulation of the feet extended anteromedially along the postcentral gyrus. Beneath this activation cluster, in most single subjects and at group level, activation was also observed deep in the interhemispheric fissure in response to tactile stimulation of the genitalia (Fig. 2; medial view, left hemisphere). On closer examination, this area corresponds to the pMCG and not to S1, which had been suggested previously 4 .

Although it was not the objective of this study, these data also allow a comparison of habituation effects during the functional tasks. Inspection of mean signal intensity curves superomedial in S1 shows comparable habituation between shaft and feet stimulation (Fig. 6), indicating that task saliency was comparable. Penile shaft and feet representations showed several areas of overlap (Fig. 1). Correspondingly, in women, overlap has been demonstrated between the genitalia and nipple 29 . This may have impeded earlier intraoperative mapping experiments by Penfield and colleagues, and it may partly clarify why genital sensations were so hard to induce 2 . Furthermore, this finding may also help to provide insight into why electrical therapies such as dorsal genital nerve 30 and posterior tibial nerve 18,19 stimulation share a similar inhibitory effect on bladder activity. Inferolateral in S1, we observed robust bilateral activation during tactile stimulation of the penile shaft and, to a lesser extent, in the left hemisphere during tactile stimulation of the right foot. Activation inferolateral in S1 has also been described during electrical stimulation of the clitoris 31 and mechanical stimulation of the rectum, suggesting this area may be involved in processing pelvic sensory information 32 . Furthermore, it has been argued that this cluster is in close proximity to the representation of the face in S1 and represents stimulus-related activation rather than face/mouth movements due to discomfort 31 . We agree with this observation for the following reasons. First, robust bilateral activation inferolateral in S1 was consistently seen in single subjects and the group. Prior to the sensory tasks, subjects were instructed to lie still, breathe as they normally would and not make any movements. The current stimulus (brushing with a toothbrush) was well tolerated by participants and none reported discomfort after the scanning procedure. Hence, it is unlikely that subjects made similar mouth/face movements due to discomfort, especially as they were explicitly instructed not to do so.
Second, inferolateral activation clusters in S1 showed high functional connectivity to the superomedial S1 clusters and also to other associative sensorimotor areas such as S2, the insula and vPMC (Fig. 4). This suggests that activation in this region is related to tactile genital stimulation and not due to co-occurring mouth/face movements. Bilateral activation of the vPMC was observed during tactile stimulation of the penile shaft, and unilateral activation of the vPMC was observed in the left hemisphere during tactile stimulation of the right foot. In both primates and humans, this area has been described to be sensitive to multisensory input, including tactile stimuli 33,34 . Accordingly, previous studies have demonstrated similar activation of this area during both tactile 8 and electrical 31 stimulation of the genitalia. Activation of the posterior insula was observed during tactile stimulation of the penile shaft and the feet, whereas activation of the anterior insula was only observed during genital stimulation. Posterior insula activation has been described earlier and is associated with gentle touch processing 9,35 . Stimulation paradigms used in these studies included gentle stroking with a brush, similar to the stroking paradigm with a toothbrush in the present study. On the other hand, activation of the anterior insula was observed during stimulation of the penile shaft and not the feet. Other cortical areas solely activated during stimulation of the penile shaft include the pMCG and mPFC. These areas have been associated with the processing of affective/emotional properties of touch 10,36 , which fits well with the specific character of sensations (i.e. sexual or erotic) that may arise during tactile stimulation of the genitalia as opposed to the feet. In the current study, however, we did not assess potential sexual or erotic sensations experienced during stimulation, so direct correlations were not possible. Future research including psychometric measurements (e.g. by means of questionnaires) with both arousing and non-arousing stimuli is needed to determine whether activation of affective/emotional brain regions correlates with the perception of such sensations.

Table 2. Distances between group penile shaft and foot activation foci in S1 and cerebellum. The distances between penile shaft and foot activation foci in both hemispheres at group level. Brain regions in which distances were measured are indicated in brackets. Distances measured in Euclidean space are indicated with an asterisk. S1: primary somatosensory cortex; Cb: cerebellum.

For the penile shaft, activation was observed posterior in the thalamus in the right hemisphere, corresponding to the VPL. In the left hemisphere, activation was observed more anteriorly, corresponding to the MD (Table 1). Sensations of touch are processed through the dorsal column-medial lemniscus pathway projecting to the thalamus, in particular the VPL, and from there on to the somatosensory cortex 20 . Our findings suggest the MD nucleus is also involved in processing genital touch. Accordingly, activation was also observed in the mPFC, to which MD afferents project 20 . In addition, electrophysiological studies in cats and rats have identified multiple bilateral subregions of the thalamus receiving inputs from the genital tract, including the VPL and MD 24,37 .
Here, unilateral activation of the VPL in the left hemisphere and MD in the right hemisphere may, however, be the result of the small cluster size and weak BOLD signals measured in the thalamus using the current whole-brain acquisition protocol (Table 1). On the other hand, for the left foot, contralateral VPL activation was observed and for the right foot bilateral VPL activation was observed. Accordingly, contralateral S1 activation was observed for the left foot and bilateral S1 activation for the right foot. Here we also mapped the representation of the penile shaft and feet in the anterior and posterior lobes of the cerebellum (Fig. 3). In line with previous findings, tactile stimulation of the feet evoked ipsilateral activation in lobule IV 38,39 . In some single subjects, activation was also observed more posterior in the cerebellum in both ipsilateral and contralateral hemispheres. At the group level, sparse activation was observed only in the contralateral hemisphere. We expected the genitalia would be represented just posterior to the feet in lobule IV, supporting previous fMRI studies in humans describing a somatotopical layout of the body in the anterior lobe of the cerebellum orientated anterior-posteriorly [38][39][40] . Surprisingly, in most single subjects and at the group level, genital representations were found more posterior in lobule VI in both cerebellar hemispheres (Fig. 3), seemingly posterior to the cerebellar hand representations 40 . In 2 individuals (#11, #13), activation was also observed adjacent to the feet representations in lobule VI, where we would have expected the genitalia to be represented. In these subjects, though, we also observed activation more posterior in lobule VI. When comparing cerebellar representations to those found in S1, the feet representations found anterior in lobule IV appear to be part of the somatomotor network 41 also including the superomedial S1 feet and shaft representations. In contrast, the shaft representations found posteriorly in lobule VI may be part of a more associative frontoparietal network 41 also incorporating the inferolateral S1 cluster. In the present study, timeseries from cerebellar clusters showed low functional connectivity to any of the other ROIs. Regions that showed the highest degree of connectivity were the inferolateral S1 and vPMC in the right hemisphere. When inspecting the topographic organization of cerebrocerebellar circuits based on intrinsic functional connectivity, lobule VI is largely mapped to the inferolateral S1 and vPMC 41 . Our data suggest that these cerebellar representations of the genitalia in lobule VI belong to this particular cerebrocerebellar network. The relatively low functional connectivity demonstrated here may be the result of methodological differences such as task-based functional connectivity vs resting-state connectivity and sample sizes N = 9 vs N = 1000. Functional connectivity was assessed for both functional tasks, no previous studies have reported on this. For the penile shaft, this was mainly done to see if we could demonstrate separate cerebral networks involved in processing genital touch (i.e. discriminative vs affective). Brain regions such as superomedial and inferolateral S1, S2, vPMC, the posterior insula and right anterior insula showed high functional connectivity to each other. On the other hand, the pMCG, left anterior insula and cerebellum showed low overall connectivity to other brain regions. 
Interestingly, for the feet, superomedial S1 representations showed high connectivity with S2 and the posterior insula on the ipsilateral side, whereas connectivity with S2 on the contralateral side was low. This suggests that, although bilateral activation of S2 was observed, there is some lateralization and dominance of the contralateral S2. Moreover, we observed high functional connectivity between the posterior insula and S2 in the same hemisphere, which is in accordance with the finding that these cortical areas are reciprocally connected 42 . We acknowledge, however, that the task-based functional connectivity analysis in our study does not give a measure of intrinsic connectivity of neural networks, in contrast to resting-state fMRI. In addition, the measures of functional connectivity demonstrated here are related to our stimulation paradigm (i.e. brushing with a toothbrush). Other studies using different stimulation paradigms to investigate genital touch may produce different connectivity patterns.

The use of ultra-high field (7T) fMRI here provided considerable benefits compared to neuroimaging techniques used in previous studies investigating genital touch such as EEG, MEG and 3T fMRI [4][5][6][7][8] . Due to significant gains in SNR at 7T, we were able to acquire data with much higher spatial resolution (1.77 × 1.77 × 1.75 mm3), unmatched by previous studies. Furthermore, while no direct comparisons were made, it is plausible that 7T fMRI offers increased BOLD sensitivity and allows detection of smaller effects, facilitating increases in statistical strength when conducting both single subject and group analyses compared to fMRI at lower field strengths (1.5 and 3 T) 17,43 . On the other hand, the relative contribution of physiological noise also increases at higher field strengths. 7T fMRI, for instance, is more susceptible to false positive activation caused by subject head motion, potentially leading to higher exclusion rates in comparison to fMRI at lower field strengths. By employing a multiband EPI sequence, whole-brain coverage including the cerebellum was achieved whilst preserving high spatiotemporal resolution. The current study is the first investigating genital sensation with such an extensive field of view and thereby the first to report on cerebellar representations of male genital sensation. This achievement may partly result from the fact that we placed a dielectric pad containing CaTiO3 behind the subjects' heads, which has been shown to increase both cerebellar T1-weighted anatomical coverage and detection of T2*-weighted BOLD signals 44 .

Conclusion
In conclusion, using 7T fMRI, we present neural representations of genital sensation with unprecedented spatial resolution and whole-brain coverage in both single subjects and the group. We clearly show the genitalia are represented in the groin region in S1 and not below the feet. Whole-brain responses and additional connectivity analyses revealed that passive penile stimulation evoked significant activation in brain regions that can be segregated from those associated with feet stimulation. Genital sensations are processed in both sensorimotor and affective brain regions, whereas feet sensations are processed in sensorimotor regions. These differences may contribute to the specific character of sensations (i.e. sexual or erotic) that are associated with stimulation of the external genitalia.

Materials and Methods
Subjects. This study was conducted in agreement with the principles specified by the Declaration of Helsinki.
Approval for the current study was given by the Medical Ethics Committee of the Erasmus Medical Center Rotterdam (METC 2015-451). All subjects provided written informed consent before entering the study. 17 healthy right-handed male subjects (mean age ± SD: 29.6 ± 7.8 years) participated in this study. Subjects were asked to take off their trousers and placed in a supine position on the MRI-bed. Stimuli and functional paradigm. All subjects completed the same scanning protocol, consisting of functional runs followed by a T1-weighted anatomical scan of the whole-brain for co-registration of functional data. Two sensory tasks were performed using a block paradigm. These tasks included subjects undergoing tactile stimulation of the penile shaft and medial aspect of the left and right foot. During both runs, an experimenter was positioned at the entrance of the scanner bore. Tactile stimulation was delivered using a commercially available toothbrush attached to a stick. The experimenter received audio cues indicating when and where to brush on MR-compatible headphones, generated in MATLAB using the Psychophysics Toolbox Version 3 (http://psychtoolbox.org/). The left and right penile shaft were brushed for a duration of 20 s respectively, followed by 20 s of rest (no brushing). This sequence was repeated 10 times with an additional rest period of 20 s at the start of both runs, resulting in a total scan time of 620 s per run. Brushing was done in a proximal to distal direction at a frequency of approximately 1 Hz and performed by the same experimenter for all subjects to minimize inter-subject stimulation variability 13 . For this study, a toothbrush was used to deliver tactile stimulation with the aim to mimic a physiological stimulus without inducing sexual arousal. The brushing of a toothbrush is a good alternative for human touch 7,45 and can comfortably be executed while standing at the entrance of the scanner bore. Subjects were given a towel which they were instructed to place on the abdomen. Subsequently, subjects were instructed to place the penis on the towel in order to prevent skin-to-skin contact with the thigh and abdomen during the tactile stimulation. Prior to the actual scanning session, all subjects underwent a training session in a mock scanner. This gave subjects the opportunity to get acquainted with the tactile stimulus (brushing of a toothbrush). During this training session, tactile stimulation was delivered in a similar manner as described above. Data acquisition. All functional and structural data were acquired on a 7T MRI scanner (Philips Achieva) using a volume transmit coil and a 32-channel receive coil (Nova Medical). Functional data was acquired using a multiband echo planar imaging (mb-EPI) sequence with multiband factor 2. Whole-brain coverage, including the anterior lobe of the cerebellum, was achieved using the following parameters: voxel size 1.77 × 1.77 × 1.75 mm 3 ; matrix size: 104 × 127; FOV = 184 × 223 mm 2 ; number of slices: 70; TR/TE = 2000/25 ms; flip angle = 70°; in-plane SENSE factor R = 3. Whole-brain anatomical data was acquired using the MPRAGE sequence with the following parameters: voxel size 0.7 × 0.7 × 0.7 mm 3 , matrix size: 352 × 353, FOV = 246 mm; number of slices: 249; TR/TE = 4.4/1.97 s, SENSE factors R = 1.6 (anterior-posterior) and R = 1.5 (right-left); total acquisition time 8'35". 
In addition, to account for signal loss in infratentorial areas, a dielectric pad containing calcium titanate (CaTiO3) was placed posterior to the subjects' heads 46 .

Image preprocessing. All data were reconstructed on an offline workstation using dedicated reconstruction software (ReconFrame, Gyrotools, Zürich, Switzerland). Further data processing was done in SPM12 (Wellcome Trust Center for Neuroimaging, London, UK). Pre-processing steps included joint image realignment of all four functional runs, co-registration of the anatomical image to the resulting mean functional image and smoothing of functional data with a Gaussian kernel (FWHM 2.5 mm). For the extraction of peak activation coordinates, functional data were normalized to the standardized brain template of the Montreal Neurological Institute (MNI). Additionally, inflated cortical surfaces were created in Freesurfer (http://surfer.nmr.harvard.edu/) using single subject anatomical images and the MNI template. In order to aid the inflation process, all images were first bias corrected (bias FWHM = 18, sampling distance = 2) and resliced to 1 mm isotropic in SPM.

Whole-brain analyses. First level statistical analysis was conducted using the General Linear Model (GLM). Each functional task was modeled as a boxcar convolved with a canonical hemodynamic response function (HRF) and its temporal derivative as basis functions. Realignment parameters were added as nuisance regressors to account for confounding motion effects. The response for each task was estimated independently from the others. Activation maps for tactile stimulation of the left and right penile shaft showed a high degree of overlap in both hemispheres, and were therefore conjoined into a single contrast using a global null conjunction analysis 47 (thresholded at p < 0.05, voxel-based FWE). Activation maps for tactile stimulation of the left and right foot were generated as separate contrasts and thresholded at p < 0.05 FWE. Second level statistical analysis was conducted using a one-sample t-test on individuals' task responses. Likewise, at group level, left and right shaft contrasts were conjoined using a global null conjunction analysis (p < 0.005, uncorrected for multiple comparisons). Activation maps for tactile stimulation of the left and right foot were again thresholded at p < 0.005, uncorrected for multiple comparisons. Both single subject and group level cortical activation maps were projected on the inflated cortical surfaces created in Freesurfer and sampled at the mid-cortical depth in order to avoid vascular artifacts at the pial surface.

Functional connectivity analyses. To further evaluate activation of different networks (i.e. discriminative vs affective), we computed the correlation between timeseries from different ROIs in single subjects. Timeseries were extracted from individual pre-processed contrast images, which were realigned, co-registered and smoothed as described earlier. For the penile shaft, ROIs included were S1, S2, vPMC, posterior and anterior insula, pMCG and the cerebellum. ROIs were isolated using individuals' contrast images and successfully identified in 9 out of 13 individuals. Activation in the thalamus and mPFC could only be observed in 4 and 5 subjects, respectively, and these regions were therefore not included in the functional connectivity analysis.
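As an illustration only, the sketch below shows one way the ROI-to-ROI connectivity computation described in this subsection (confound regression followed by Pearson correlation of ROI timeseries) could be implemented for a single subject. It is not the authors' pipeline; the ROI names, the random placeholder data and the function name are ours.

```python
"""Sketch: ROI-to-ROI functional connectivity for one subject.
`roi_ts` holds one time course per ROI (random data as a stand-in for
extracted voxel timeseries); confound signals are regressed out and the
remaining time courses are correlated with Pearson's coefficient."""
import numpy as np

rng = np.random.default_rng(0)
n_vols = 310
roi_names = ["S1_sm", "S1_il", "S2", "vPMC", "post_insula", "cerebellum"]
roi_ts = rng.standard_normal((len(roi_names), n_vols))   # placeholder data
confounds = rng.standard_normal((n_vols, 3))              # e.g. WM/GM/CSF signals

def regress_out(ts, conf):
    """Remove confound signals from a time course by linear regression."""
    X = np.column_stack([np.ones(len(ts)), conf])
    beta, *_ = np.linalg.lstsq(X, ts, rcond=None)
    return ts - X @ beta

denoised = np.array([regress_out(ts, confounds) for ts in roi_ts])

# Pearson correlation between every pair of denoised ROI time courses;
# per-subject matrices like this one can then be averaged across subjects.
conn = np.corrcoef(denoised)
print(np.round(conn, 2))
```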
For the feet, included ROIs were the S1, S2, posterior insula and cerebellum. Again, ROIs were isolated using individuals' contrast images and were successfully isolated in 10 out of 13 individuals. Overlapping activation clusters were manually separated in ITK-SNAP (http://www.itksnap.org/). Subsequently, voxel timeseries were extracted from each ROI per single subject and denoised for signal arising from white matter, gray matter and cerebrospinal fluid using linear regression. Connectivity was defined as the linear correlation between timeseries of different ROIs, which was computed with the Pearson's correlation coefficient. Single subject correlation matrices were used to compute a mean correlation matrix for both the penile shaft and feet. Data availability The datasets analyzed during the current study are available in NIFTI format for interested researchers. Please contact the corresponding author to make a request.
v3-fos-license
2019-12-22T14:02:54.870Z
2019-12-20T00:00:00.000
209433930
{ "extfieldsofstudy": [ "Computer Science", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/1424-8220/20/1/52/pdf", "pdf_hash": "c9d698e251ff16c60b7a345b73fbaf3c6e40c601", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41784", "s2fieldsofstudy": [ "Environmental Science" ], "sha1": "9c35012b268e2f293b3532cd3d83c09bdd01cbf2", "year": 2019 }
pes2o/s2orc
Extrinsic Calibration between Camera and LiDAR Sensors by Matching Multiple 3D Planes † This paper proposes a simple extrinsic calibration method for a multi-sensor system which consists of six image cameras and a 16-channel 3D LiDAR sensor using a planar chessboard. The six cameras are mounted on a specially designed hexagonal plate to capture omnidirectional images and the LiDAR sensor is mounted on the top of the plates to capture 3D points in 360 degrees. Considering each camera–LiDAR combination as an independent multi-sensor unit, the rotation and translation between the two sensor coordinates are calibrated. The 2D chessboard corners in the camera image are reprojected into 3D space to fit to a 3D plane with respect to the camera coordinate system. The corresponding 3D point data that scan the chessboard are used to fit to another 3D plane with respect to the LiDAR coordinate system. The rotation matrix is calculated by aligning normal vectors of the corresponding planes. In addition, an arbitrary point on the 3D camera plane is projected to a 3D point on the LiDAR plane, and the distance between the two points are iteratively minimized to estimate the translation matrix. At least three or more planes are used to find accurate external parameters between the coordinate systems. Finally, the estimated transformation is refined using the distance between all chessboard 3D points and the LiDAR plane. In the experiments, quantitative error analysis is done using a simulation tool and real test sequences are also used for calibration consistency analysis. Introduction Three-dimensional mapping and localization techniques for autonomous vehicles [1][2][3] have been investigated extensively for over the years. In recent investigations, visual and depth information is commonly used to reconstruct accurate 3D maps containing shape and color information of an environment. Especially, a 360-degree LiDAR sensor and multi-view vision cameras are commonly used to acquire omnidirectional shape and color information from a vehicle. Accurate extrinsic calibration between cameras and LiDAR sensors increase the reliability of data fusions between the color and shape information of 3D point data. Many extrinsic calibration methods between 360-degree LiDAR and vision camera have been proposed. However, they mostly require a specially designed calibration board or calibration object [4][5][6][7][8]. Zhou et al. [4] used a planar calibration board that has three circular holes. The center coordinates of the circular holes with respect to both the LiDAR and camera are used to align the two sensor's coordinate axes. Pusztai and Hajder [5] proposed an extrinsic calibration method using a cube-type object. The 3D faces of the cube are fitted to planes using 3D points from a 3D LiDAR and they are again used to find the 3D corners of the cube. The relative rotation between the camera-LiDAR system is estimated by solving 2D and 3D correspondences of the cube corners. Lee et al. [6] use a spherical object for extrinsic calibration. First, the 3D center coordinates of the spherical object are estimated using the 3D LiDAR point scanning the surface of the object. The 2D center coordinates of the spherical object are found by a circle detection algorithm in the camera images. Using the 2D and 3D correspondences of the center coordinates, the extrinsic parameters are estimated using the PnP algorithm. Their method is simple and provides stable performance. Mirzaei F. M. et al. 
propose [9] a method for joint estimation of both the intrinsic parameters of the Lidar and the extrinsic parameter between a LIDAR and a camera. This method is based on existing approaches to solving this nonlinear estimation problem. In these cases, the accuracy of the result depends on a precise initial estimate. This method computes an initial estimate for the intrinsic and extrinsic parameters in two steps. Then, the accuracy of the initial estimates is increased by iteratively minimizing a batch nonlinear least-squares cost function. Instead of using any special calibration board or object, the common chessboard pattern is also used for extrinsic parameter estimation. Pandey et al. [10] proposed a calibration method of using the normal vector of a chessboard. The rotation and translation between a LiDAR and camera are estimated by minimizing an energy function of correspondences between depth and image frames. Sui and Wang [11] proposed to use the 2D edge lines and the 3D normal vector of the chessboard. The boundaries of the chessboard in 2D images are extracted and their 3D line equations are also calculated by the RANSAC of 3D LiDAR data. A single pose of the checkerboard provides one plane correspondence and two boundary line correspondences. Extrinsic parameters between a camera and LiDAR are estimated using these correspondences obtained from a single pose of the chessboard. Huang and Barth [12] propose an extrinsic calibration method using geometric constraints of the views of chessboard from the LiDAR data and camera image. First, the image and distance data at different position and orientation of the chessboard are acquired using a LiDAR and camera system. The rotation and translation are estimated by solving the closed form equation using the normal vector of the chessboard plane and the vector passing along the plane. Finally, these are refined using the maximum likelihood estimation. Velas et al. [13] propose an extrinsic calibration method using the detection of a calibration board containing four circular holes. The coarse calibration of this method is performed using circle centers and the radius of the 3D LiDAR data and 2D image. Then, a fine calibration is performed using the 3D LiDAR data captured along the circle edges and the inverse distance transform of the camera image. Geiger et al. [14] calibrate a stereo camera and a LiDAR in a single shot using multiple chessboards placed across a room. This paper's algorithm requires that all the chessboards' corners' coordinates should be calculated with respect to the stereo camera to distinguish multiple chessboards. To do this, the authors use the stereo camera to reconstruct the 3D coordinates of every chessboard corner. The rotation between the camera and the LiDAR is estimated using the normal vectors between the corresponding 3D LiDAR points and chessboard corners from the camera. The translation is estimated using the point-to-plane distance of 3D centers of the chessboard. Pandey et al. [15] propose a method for extrinsic calibration of a LiDAR and camera system using the maximization of mutual information between surface intensities. Budge et al. [16,17] propose a TOF flash LiDAR and camera method using the chessboard pattern. Their method combines systematic error correction of the flash LiDAR data, correction for lens distortion of the digital camera and flash LiDAR images, and fusion of the LiDAR to the camera data in a single process. 
In this paper, we propose an external calibration method that is based on the theory proposed in the work of [10]. However, instead of using a single energy function to calibrate the extrinsic parameters, we divide the calibration process into three different steps to increase the reliability of the calibration. In our previous paper [18], we introduced a two-step calibration method and applied it to reconstruct a 3D road map for an autonomous vehicle. The rotation and translation calibration steps are done independently to get an initial transformation. Then, the initial results are optimized iteratively in the third step of the calibration. More details of the proposed method and the differences from those in the literature are shown in the next sections.

The contents of this paper are as follows. Section 2 briefly describes a multi-sensor device used for calibration experiments. Section 3 is divided into four subsections. Section 3.1 explains a plane fitting method of the chessboard using the LiDAR sensor data. Section 3.2 explains another plane fitting method of the chessboard with respect to the camera coordinate system using the reprojected 2D chessboard corners. Rotation and translation estimation between a camera-LiDAR unit is described in Section 3.3. Refinement of the initial rotation and translation for more accurate calibration is described in Section 3.4. Sections 4.1 and 4.2 show the experimental results and performance analysis using simulation and real test sequences, respectively. Finally, conclusions are provided in Section 5.

An Omnidirectional Camera-LiDAR System
A front view of our camera-LiDAR system is shown in Figure 1. The system is originally designed to obtain 360-degree images using multiple cameras and LiDAR data for autonomous navigation of a vehicle. The system can be mounted on top of the vehicle and obtain the color and shape data around the vehicle in real time. To align the color and shape information from the two different types of sensors, the extrinsic parameters between the sensors need to be calibrated. In this paper, however, we propose a calibration algorithm that finds the external parameters between a single camera and the LiDAR. The proposed method can later be used to calibrate multiple cameras with the LiDAR by sequentially running the proposed algorithm on each camera-LiDAR pair. For color image acquisition, a FLIR Blackfly color camera is used. The field of view of the camera lens is 103.6°, and the resolution is 1280 × 1024 pixels. The camera is mounted on the top of a hexagonal plate. In addition, a 16-channel Velodyne VLP-16 LiDAR sensor is mounted on another plate and fixed above the six cameras. The VLP-16 sensor uses an array of 16 infrared (IR) lasers paired with IR detectors to measure distances to objects. The vertical field of view of the sensor is 30°, and it can acquire distance data at a rate of up to 20 Hz over 360 degrees horizontally. Our camera-LiDAR system has a partially overlapping field of view, as shown in Figure 2.

Figure 2. A diagram of a camera and LiDAR unit on a plate. The fields of view of the camera (blue) and the LiDAR (red) partly overlap each other.

Extrinsic Calibration of the Camera-LiDAR System
In this section, we describe the details of the proposed calibration method step by step. A flowchart of the proposed calibration method for a camera-LiDAR sensor unit is shown in Figure 3. A summary of the proposed method is as follows. First, 2D images and 3D distance data of a chessboard are captured from a camera-LiDAR sensor module. We acquire multiple image and distance data at different poses (position and orientation) of the chessboard. At each chessboard pose, the camera pose with respect to the world coordinate system (the chessboard coordinate system) is calibrated by the PnP algorithm. Then, the 3D positions of the chessboard corners are calculated by back-projection of the 2D chessboard corners, and they are used to fit a 3D plane of the chessboard with respect to the camera coordinate system. For the same chessboard pose, the 3D points scanning the board are segmented by simple distance filtering. Then, the 3D plane equation with respect to the LiDAR sensor coordinate system is also calculated. The relative orientation between the camera and the LiDAR is estimated by aligning the normal vectors of all plane pairs. To find the relative translation, we arbitrarily select a 3D point on each rotated camera plane and project the point to another point on the 3D LiDAR plane. The distance between the two 3D points in each view is iteratively minimized until all point pairs from all views are properly aligned. Finally, the relative rotation and translation are refined by minimizing the distance between the LiDAR planes and all 3D corner points of the chessboard.

3D Chessboard Fitting Using LiDAR Data
One advantage of the proposed calibration method is that it uses only one common chessboard pattern. Therefore, it is very easy to calibrate the extrinsic parameters between a 2D vision sensor and a 3D LiDAR sensor.
In addition, the proposed method can also be used to calibrate multiple cameras and a LiDAR sensor if the cameras can capture the chessboard pattern simultaneously. The first step of calibration is to capture the images and depth data of the chessboard and to find the 3D plane equations of the board in each sensor coordinate system. Suppose the camera and LiDAR sensors simultaneously capture an image and 3D depth data of a certain pose of the chessboard. If some portion of the LiDAR scan data contains the depth of the chessboard, it is straightforward to find the 3D plane equation with respect to the LiDAR coordinate system. As shown in Figure 4a, the 3D points lying on the chessboard area can be extracted from the LiDAR data by simple distance filtering. Then, we use the RANSAC [19] algorithm to fit an accurate 3D plane equation. The best-fit plane of the 3D points P^L_i = {p_1, p_2, ..., p_m} (i = 1, ..., N) of the chessboard at the i-th frame is estimated using the inliers determined by the RANSAC algorithm. Here, N is the total number of image frames and m is the number of 3D points in the i-th frame. This RANSAC-based plane fitting algorithm can be described as follows:

1. Randomly select three points from the 3D points P^L_i on the chessboard.
2. Calculate the plane equation using the selected three points.
3. Find inliers using the distance between the fitted plane and all the other 3D points.
4. Repeat the above steps until the best plane, with the highest inlier ratio, is found.

In detail, the maximum number of iterations is 1000, and 3D points within 10 mm of the plane are considered inliers. The plane with an inlier ratio of 90% or more is considered the best-fit plane. A minimal sketch of this plane-fitting step is given below.
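The following Python sketch illustrates the RANSAC loop above with the stated parameters (1000 iterations, 10 mm inlier threshold, 90% inlier ratio). It is an illustrative implementation, not the authors' code; the function name and the assumption that the points are expressed in metres (so 10 mm = 0.01) are ours.

```python
"""Sketch of the RANSAC plane fit described above: sample three LiDAR
points on the board, fit a plane, count inliers, keep the best plane.
`points` is an (m, 3) array of LiDAR points on the chessboard (metres)."""
import numpy as np

def fit_plane_ransac(points, n_iter=1000, thresh=0.01, min_inlier_ratio=0.9):
    rng = np.random.default_rng()
    best_plane, best_inliers = None, 0
    for _ in range(n_iter):
        p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p2 - p1, p3 - p1)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                 # degenerate (nearly collinear) sample
            continue
        normal /= norm
        d = -normal @ p1                # plane: n . x + d = 0
        dist = np.abs(points @ normal + d)
        n_inliers = int((dist < thresh).sum())
        if n_inliers > best_inliers:
            best_inliers, best_plane = n_inliers, np.append(normal, d)
        if n_inliers >= min_inlier_ratio * len(points):
            break                       # good enough: 90% of points are inliers
    return best_plane                   # coefficients [a, b, c, d]
```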
We can thus calculate the best-fit plane of the chessboard in the LiDAR coordinate system by employing RANSAC. This plane is used to estimate the relative pose with respect to the camera coordinate system. Figure 4b shows the result of plane fitting using the 3D points of the chessboard.

3D Chessboard Fitting Using Camera Images
From a 2D image of the camera, we can find the 3D plane equation of the chessboard if we know the intrinsic parameters of the camera. In this paper, we assume that the intrinsic parameters of the camera are calibrated in advance using a common camera calibration technique such as Zhang's algorithm [20]. In our experiments, we use the intrinsic calibration algorithm implemented in MATLAB. Once we know the intrinsic parameters, the 3D pose of the camera with respect to the chessboard coordinate system is calculated by the well-known PnP algorithm [21] implemented in OpenCV. We cannot use the MATLAB algorithm for camera pose calibration because that algorithm rejects some chessboard images as outliers, which is not acceptable for our purpose because we need to use all input pairs of camera image and LiDAR data. Then, the 2D coordinates of all chessboard corners in an image can be reprojected to 3D coordinates using the chessboard-to-camera transformation matrix. Because we have N views of the chessboard, we can find N sets of 3D chessboard corners. Figure 5a shows an image of the chessboard and the detected corner points with subpixel precision. These corner points are reprojected into 3D space, as shown in Figure 5b, and all the reprojected points are reconstructed with respect to the camera coordinate system.

Finally, 3D planes with respect to the camera coordinate system are fitted using the 3D chessboard corners of each view and the PCA (principal component analysis) algorithm. Assume that P^C_i is the set of 3D chessboard corner points obtained from the i-th chessboard pose and that p_j = [x_j, y_j, z_j, 1]^T is the homogeneous coordinate of the j-th 3D chessboard corner. A rectangular matrix A composed of the elements of P^C_i is given in Equation (1), and A is decomposed into UΣV^T using SVD (singular value decomposition). The elements of the fourth row of V^T become the coefficients a, b, c, and d of the plane equation ax + by + cz + d = 0. A minimal sketch of this step is given below.
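The sketch below shows one way to implement this homogeneous least-squares plane fit, consistent with the description of Equation (1): stack the reprojected corners as homogeneous rows of A and take the right singular vector with the smallest singular value as [a, b, c, d]. It is an illustrative sketch, not the authors' code, and the normalization to a unit normal is our choice.

```python
"""Sketch of the camera-side plane fit: fit ax + by + cz + d = 0 to the
reprojected chessboard corners of one view by SVD of the homogeneous
corner matrix A (Equation (1))."""
import numpy as np

def fit_plane_svd(corners_3d):
    """corners_3d: (n, 3) chessboard corners in the camera frame."""
    A = np.hstack([corners_3d, np.ones((len(corners_3d), 1))])  # rows [x y z 1]
    _, _, Vt = np.linalg.svd(A)
    a, b, c, d = Vt[-1]                       # fourth (last) row of V^T
    scale = np.linalg.norm([a, b, c])
    return np.array([a, b, c, d]) / scale     # plane with unit normal
```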
Calculating Initial Transformation between Camera and LiDAR Sensors
In the proposed extrinsic calibration, we first estimate the rotation transformation between the two sensors, from the camera to the LiDAR coordinate system. In Figure 6, let ρ^C_i and ρ^L_i be the plane fitting results of the i-th pose of the chessboard from the camera (C_i) and the LiDAR (L_i) sensors, respectively. The white line starting from each plane represents its normal vector. The initial poses of the two planes are shown in Figure 6a. We consider the LiDAR coordinate system as the world coordinate system. The rotation matrix R from the camera to the world coordinate system is found by minimizing the error in Equation (2), where N^L_i represents the normal vector of the plane ρ^L_i and N^C_i represents the normal vector of the plane ρ^C_i. Because the rotated normal vector of ρ^C_i should match that of ρ^L_i, we can find the rotation matrix that minimizes the error in Equation (2). The cross-covariance matrix of the sets of normal vectors N^C_i and N^L_i is given by Equation (3). This cross-covariance matrix is decomposed into UΣV^T using SVD, and the rotation matrix is calculated by Equation (4). If the rotation matrix is calculated correctly, the rotated plane ρ^C_i must be exactly parallel to ρ^L_i, as shown in Figure 6b.

Then, the translation vector is calculated by minimizing the distance between the two planes. Figure 7 shows how to estimate the distance between the two planes. First, a 3D point X^C_i = [x, y, z]^T is selected randomly on ρ^C_i; in the actual experiments, we select one of the chessboard corners, and the selection is random to account for the imperfect rotation estimation. The translation vector t = [t_x, t_y, t_z]^T is calculated by first projecting X^C_i onto ρ^L_i and then minimizing the distance between X^C_i and the projected point X'^C_i. The distance between X^C_i and X'^C_i in the i-th pose can be written as in Equation (5), where the plane equation of ρ^L_i is given as a^L_i x + b^L_i y + c^L_i z + d^L_i = 0. X'^C_i is calculated from this distance, X^C_i, and the normal vector of the LiDAR plane. If the center of the projected points X'^C_i is µ'^C_i and the center of the points X^C_i on the chessboard planes is µ^C_i, then the translation shift ∆t is calculated by Equation (6). The translation t is obtained by accumulating the shift vectors, and the process is repeated until convergence. A minimal sketch of these two steps is given below.
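Since Equations (2)-(6) are not reproduced in the extracted text, the sketch below follows the verbal description only: the rotation step is the standard Kabsch/Procrustes solution that aligns the camera-plane normals with the LiDAR-plane normals via SVD of their cross-covariance, and the translation step accumulates the mean shift that moves the sampled camera points onto their LiDAR planes. Function and variable names are illustrative, and planes are assumed to be given as [a, b, c, d] with unit normals.

```python
"""Sketch of the initial extrinsic estimate: rotation from matched plane
normals, then an iterative translation that drives the sampled camera
points onto the corresponding LiDAR planes."""
import numpy as np

def rotation_from_normals(cam_normals, lidar_normals):
    """cam_normals, lidar_normals: (N, 3) unit normals of matched planes."""
    cov = cam_normals.T @ lidar_normals            # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(cov)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    return Vt.T @ D @ U.T                          # maps camera axes to LiDAR axes

def translation_from_planes(R, cam_points, lidar_planes, n_iter=100, tol=1e-6):
    """cam_points: one sampled 3D point per view (N, 3), camera frame.
    lidar_planes: (N, 4) plane coefficients in the LiDAR frame."""
    t = np.zeros(3)
    for _ in range(n_iter):
        X = cam_points @ R.T + t                   # rotated and shifted points
        n, d = lidar_planes[:, :3], lidar_planes[:, 3]
        signed = np.einsum("ij,ij->i", X, n) + d   # signed point-to-plane distances
        shift = -(signed[:, None] * n).mean(axis=0)  # mean shift toward the planes
        t += shift
        if np.linalg.norm(shift) < tol:
            break
    return t
```

With at least three chessboard poses whose normals are not all parallel, the three translation components are fully constrained, which matches the minimum-pose requirement discussed next.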
Then, the translation vector is calculated by minimizing the distance between the two planes. Figure 7 shows how to estimate the distance between two planes. First, a 3D point X_Ci = (x, y, z)^T is selected randomly on ρ_Ci; in the actual experiments, we select one of the chessboard corners, and the selection is random to account for the imperfect rotation estimation. The translation vector t = (t_x, t_y, t_z)^T is calculated by first projecting X_Ci onto ρ_Li and then minimizing the distance between X_Ci and the projected point X′_Ci. The distance between X_Ci and X′_Ci in the i-th pose can be written as in Equation (5), where the plane equation of ρ_Li is given as a_Li x + b_Li y + c_Li z + d_Li = 0. X′_Ci is calculated using the distance error ε_t, X_Ci, and the normal vector of the LiDAR plane. If the center of the projected points X′_Ci is µ′_Ci and the center of the points X_Ci on the chessboard plane is µ_Ci, then the translation shift ∆t is calculated by Equation (6). The translation t is determined by accumulating the shift vector, and the process is repeated until convergence.

The translation vector is estimated by minimizing the error in Equation (5) over all poses of the chessboard. Figure 8a shows the initial poses of ρ_Ci and ρ_Li after the initial rotation matrix is applied. Figure 8b shows the alignment between the two planes after the translation is estimated. There are three degrees of freedom each for rotation and translation. To remove any ambiguity in the translation and rotation estimates, at least three poses of the chessboard data are needed; we use three or more frames and a corresponding amount of LiDAR depth data. Therefore, the translation parameters should be solved in an iterative way: in each iteration of the linear optimization, the sampled 3D point X_Ci is projected onto the plane ρ_Li until the distance error is minimized.
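The iterative translation estimate (Equations (5)-(6)) can be sketched as follows, assuming NumPy; the convergence threshold and iteration cap are illustrative choices, not values from the paper.

```python
import numpy as np

def initial_translation(points_cam_rot, lidar_planes, tol=1e-6, max_iter=100):
    """points_cam_rot: one sampled chessboard point per pose, already rotated
    into the LiDAR frame (N x 3).  lidar_planes: matching (a, b, c, d) rows with
    unit normals (N x 4).  Accumulates the translation until the shift of the
    point centers stops changing."""
    t = np.zeros(3)
    for _ in range(max_iter):
        X = points_cam_rot + t                        # current point estimates
        n, d = lidar_planes[:, :3], lidar_planes[:, 3]
        dist = (X * n).sum(axis=1) + d                # signed distances, Eq. (5)
        X_proj = X - dist[:, None] * n                # projections onto each plane
        delta = X_proj.mean(axis=0) - X.mean(axis=0)  # shift of the centers, Eq. (6)
        t += delta
        if np.linalg.norm(delta) < tol:
            break
    return t
```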
Transformation Refinement

A more accurate transformation between the camera and LiDAR sensors is calculated based on the initial transformation obtained in Section 3.3. The transformation refinement is performed using the plane equations of the LiDAR and all corners of the chessboard. The refinement step is as follows. First, suppose P_j^Ci is the j-th corner point of the i-th pose of the chessboard, already transformed by the initial transformation. As shown in Equation (7), we define an error function as the distance between all corners of the chessboard and the LiDAR planes. Then, the rotation matrix R and the translation vector t between the camera and the LiDAR sensors are refined by minimizing the error in Equation (7). We solve this non-linear optimization problem using the Levenberg-Marquardt (LM) algorithm.
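A minimal sketch of this refinement step (Equation (7)) is given below, assuming NumPy and SciPy; the rotation-vector parameterization is an implementation choice here, not necessarily the authors'.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def refine_extrinsics(R0, t0, corners_cam, lidar_planes):
    """corners_cam: list with one (M x 3) array of chessboard corners per pose,
    in the camera frame.  lidar_planes: (N x 4) LiDAR plane coefficients with
    unit normals.  Minimizes the point-to-plane distances of all corners."""
    def residuals(x):
        R = Rotation.from_rotvec(x[:3]).as_matrix()
        t = x[3:]
        res = []
        for P, (a, b, c, d) in zip(corners_cam, lidar_planes):
            Q = P @ R.T + t                              # corners in the LiDAR frame
            res.append(Q @ np.array([a, b, c]) + d)      # signed plane distances
        return np.concatenate(res)

    x0 = np.concatenate([Rotation.from_matrix(R0).as_rotvec(), np.asarray(t0, float)])
    sol = least_squares(residuals, x0, method="lm")      # Levenberg-Marquardt
    return Rotation.from_rotvec(sol.x[:3]).as_matrix(), sol.x[3:]
```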
Error Analysis Using Simulation Data

As the first experiment, we analyze the accuracy of the proposed algorithm using simulation data and ground truth. We generate test datasets that simulate the 3D LiDAR-camera system using the Blensor simulator; Figure 9 shows a screen capture of generating a test sequence in the simulator. We conducted experiments using the simulation data with a minimum of 3 frames and a maximum of 30 frames. The rotation and translation errors are measured with respect to the ground truth by Equations (9) and (10), respectively. In Equation (9), trace(X) is the sum of the diagonal elements of a square matrix X and I is the 3 × 3 identity matrix. In each experiment, the test frames are randomly selected from 100 total frames, and the experiment is repeated 100 times (n = 100). In the above equations, R_GT and t_GT are the ground-truth rotation and translation, respectively, between the camera and the LiDAR sensors, and R_e and t_e are the estimates produced by the proposed algorithm.

Figure 10 and Table 1 show the comparison between the ground-truth and estimated extrinsic parameters; both the initial and the refined extrinsic parameters are compared. The rotation and translation errors decrease as the number of test frames increases, and the refinement lowers the calibration error compared with the initial estimate. The mean rotation error is measured by Equation (9), which measures the absolute difference with respect to the ground-truth rotation matrix R_GT.

Consistency Analysis Using Real Data

We conducted several experiments to evaluate the accuracy of the proposed extrinsic calibration method using the omnidirectional sensing system shown in Figure 1. The LiDAR 3D points and images are captured using PCL (point cloud library) and the FLIR SDK on Windows OS. Chessboard corners in the images are detected using the OpenCV library. Time synchronization between the two sensors is not considered because the LiDAR and camera acquire a static scene; in addition, the time interval between the two acquisitions is very short. To analyze the accuracy of the proposed calibration method, experiments were done with combinations of the following configurations:
• To test different fields of view, two types of lens are used, that is, a 3.5 mm lens and an 8 mm lens;
• To test different numbers of image frames, a total of 61 and 75 frames are obtained with the 3.5 mm and 8 mm lenses, respectively.
There is no ground-truth parameter for the external transformation between the camera and the LiDAR sensors. Instead, we measure the following parameters for accuracy and consistency analysis:
• The translation between the coordinate systems;
• The rotation angle between the coordinate systems along the three coordinate axes;
• The measurement difference between the results obtained with the 3.5 mm and 8 mm lenses (for consistency checking).
Among all captured image frames of the camera and LiDAR, we randomly select 3 to 10 frames for calibration. We repeat the random selection and the calibration 100 times for each number of image frames and then calculate the mean and the standard deviation of the calibration results. Figure 11 shows the average rotation and translation between the coordinate systems when the 3.5 mm lens is used; the average rotation and translation of each axis are stable when six or more frames are used. Figure 12 shows the standard deviation of rotation and translation after 100 repetitions of the experiments; the standard deviation converges as the number of frames increases. Figures 13 and 14 show the average and standard deviation, respectively, of rotation and translation when the 8 mm lens is used. The average rotation does not change after using more than seven frames, and the standard deviation also becomes very low after more than seven frames are used. The standard deviation at five frames becomes a little high owing to a random combination of image frames; however, after using more than six frames, the standard deviation decreases as expected.
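The repeated random-subset procedure used for this consistency analysis can be sketched as follows, assuming NumPy and a hypothetical calibrate(frames) routine that returns the estimated rotation angles and translation for a chosen subset of frames; the routine name and return convention are assumptions for illustration only.

```python
import numpy as np

def consistency_analysis(all_frames, calibrate, n_frames=10, repeats=100, seed=0):
    """Randomly select n_frames frames, calibrate, and repeat; returns the mean
    and standard deviation of the six extrinsic parameters over all repeats."""
    rng = np.random.default_rng(seed)
    results = []
    for _ in range(repeats):
        idx = rng.choice(len(all_frames), size=n_frames, replace=False)
        angles_deg, t = calibrate([all_frames[i] for i in idx])
        results.append(np.concatenate([angles_deg, t]))  # (rx, ry, rz, tx, ty, tz)
    results = np.asarray(results)
    return results.mean(axis=0), results.std(axis=0)
```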
Table 2 shows the rotation and translation results when 10 frames are used. The 10 frames are also randomly selected from the database, and this experiment is also repeated 100 times. In this table, we can see that the average rotation and translation between the coordinate systems are very similar regardless of the focal length of the camera lens. This result shows that the proposed calibration method is accurate and consistent in calibrating the external parameters between the camera and the LiDAR sensors. Table 3 shows the results according to the distance between the sensor unit and the chessboard using 10 randomly selected frames. Experiments are performed with distances of 2, 3, 4, and 5 m between the sensor unit and the chessboard. The rotation and translation between the sensors in the 2 to 4 m data sets have similar averages and low standard deviations regardless of the focal length of the camera lens. However, using the 3.5 mm lens, we cannot detect the chessboard corners in the image captured at a distance of 5 m, and thus there is no corresponding result in Table 3. In addition, with the 8 mm lens, the chessboard corner detection at 5 m has low accuracy, which results in inconsistent measurements, as shown in Table 3.
The results show that our algorithm has stable performance at distances of 2 to 4 m between the sensor unit and the chessboard. Using the "measure difference" entries in Tables 2 and 3, we show how consistent the results are even with different lens focal lengths. As shown in Table 3, the calibration results are not reliable if the calibration object distance reaches 5 m or more; we therefore propose to use the method with images and LiDAR data that capture the chessboard within a distance of 4 m. Figure 15 shows the result of projecting the 3D LiDAR points onto the chessboard area in the image planes; the red dots are the projected 3D LiDAR points that scan the chessboard areas.

Conclusions

In this paper, we propose an extrinsic calibration method to find the 3D transformation between a vision camera and a Velodyne LiDAR sensor. The method is based on matching multiple 3D planes that are obtained from multiple poses of a single chessboard with respect to the camera and the LiDAR coordinate systems, respectively. The initial rotation is estimated by aligning the normal vectors of the planes, and the initial translation is estimated by projecting a 3D point on a plane in the camera coordinate system onto the corresponding plane in the LiDAR coordinate system. The initial rotation and translation are then refined using an iterative method. The resulting relative pose information enables data fusion from both sensors, and the fused data can be used to create a 3D map of the environment for navigation of autonomous vehicles.
v3-fos-license
2017-07-07T23:38:20.293Z
2017-10-01T00:00:00.000
21592207
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "http://www.ajnr.org/content/ajnr/38/10/1850.full.pdf", "pdf_hash": "599246877e82bcba5d5e130992eef6e05674ba18", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41785", "s2fieldsofstudy": [ "Medicine", "Biology", "Psychology" ], "sha1": "599246877e82bcba5d5e130992eef6e05674ba18", "year": 2017 }
pes2o/s2orc
Neuroimaging Changes in Menkes Disease, Part 1

SUMMARY: Menkes disease is a rare multisystem X-linked disorder of copper metabolism. Despite an early, severe, and progressive neurologic involvement, our knowledge of brain involvement remains unsatisfactory. The first part of this retrospective and review MR imaging study aims to define the frequency rate, timing, imaging features, and evolution of intracranial vascular and white matter changes. According to our analysis, striking but also poorly evolutive vascular abnormalities characterize the very early phases of disease. After the first months, myelination delay becomes evident, often in association with protean focal white matter lesions, some of which reveal an age-specific brain vulnerability. In later phases of the disease, concomitant progressive neurodegeneration might hinder the myelination progression. The currently enriched knowledge of neuroradiologic finding evolution provides valuable clues for early diagnosis, identifies possible MR imaging biomarkers of new treatment efficacy, and improves our comprehension of possible mechanisms of brain injury in Menkes disease. Nonetheless, due to the rarity of the disease, imaging findings are also scarce, and our knowledge is based on scattered case reports and small case series. Increased intracranial vessel tortuosity, protean white matter signal abnormalities, transient temporal lobe changes, cerebral and cerebellar atrophy, basal ganglia anomalies, and subdural collections have been variably reported and associated with the clinical phenotype, leading to several and sometimes conflicting pathogenic hypotheses. So far, the frequency rate, precise characterization, timing, evolution, and likely pathogenesis of these neuroradiologic abnormalities remain unsatisfactorily understood. The present large retrospective and review MR imaging study aimed to improve our knowledge of the intracranial involvement in Menkes disease to provide possible neuroradiologic biomarkers useful for diagnosis and follow-up. In particular, this first Part will address the intracranial vascular and white matter changes that might be observed during the course of MD.

MATERIALS AND METHODS

MR imaging and MRA findings of children with MD were retrospectively evaluated. Children were enrolled if they had a biochemically or genetically confirmed MD diagnosis and at least 1 MR imaging. All images were evaluated by 2 neuroradiologists with >15 years of experience in pediatric neuroradiology (R. Manara and D.L.), who were aware of the diagnosis but blinded to the clinical findings; discordant findings were discussed until consensus was reached.

Intracranial Blood Vessel Evaluation
The qualitative evaluation of increased artery tortuosity was obtained from MRA or parenchymal T2-weighted images whenever angiography studies were not available. A semiquantitative evaluation of basilar artery dolichoectasia (Smoker score, see the On-line Appendix) was performed on axial T2 imaging. A quantitative evaluation of the basilar artery tortuosity was performed with commercially available software (syngo MultiModality Workplace; Siemens, Erlangen, Germany; see the On-line Appendix) in children with MD who had time-of-flight 3D-MRA in a DICOM format (15 children; mean age, 14 ± 19 months; range, 2.2-86.7 months).
Briefly, after recognizing the proximal and the distal end of the basilar artery, the software allowed the measurement in millimeters of the effective length of the whole vessel (curved length) and the linear distance between the 2 extremes. A tortuosity index was subsequently calculated according to the formula: (Curved Length/Linear Length) − 1. The index is supposed to be zero if the 2 lengths are equal. Basilar artery semiquantitative and quantitative evaluations were compared with those of 29 children without MD (Table 1) who underwent brain MR imaging and MRA for other indications. The presence of stenosis or ectasia of the intracranial arteries was also evaluated on MRA studies, while the presence of venous sinus ectasia was assessed on parenchymal imaging.

White Matter Change Evaluation
Qualitative parenchymal evaluation was performed on all MRIs. Tumefactive lesions, centrum semiovale DWI hyperintense lesions, focal nontumefactive white matter lesions, and abnormal myelination, identified in previous publications, were recorded.

Literature Review
An extensive search was performed in all major data bases (Embase, Scopus, PubMed, Cochrane, and also www.google.com) with the following terms: "Menkes" and "brain MR imaging." We included all articles reporting MR neuroimaging findings and age at MR imaging examination. From case descriptions or direct inspection of the published images, we investigated the presence/absence of abnormal findings on MR imaging/MRA. The lack of description combined with the impossibility of defining their presence/absence with the available published images was recorded as "not mentioned." We also recorded any available information about the size, distribution, signal pattern, evolution, and proposed pathogenic hypothesis of white matter abnormalities described in children with MD. Our literature review included 47 articles published between 1989 and August 2016 that are listed in the On-line Appendix.

Statistical Analysis
The variables with normal distribution were analyzed by using the Student t test, while for ordinal variables, the Mann-Whitney U test was used. Categoric variables were analyzed by using the Pearson χ2 or the Fisher exact test when required. The significance level was P < .05. Table 2 summarizes the main features of our sample and of the literature data regarding children with MD who underwent brain MR imaging.

RESULTS

In our sample, 15 children with MD were alive at the time of the study; for the remaining 11 children, death occurred at 6.3 ± 4.6 years of age (range, 9 months to 17.5 years). All children showed severe early-onset neurodevelopmental impairment; most patients presented with epilepsy during the early phases of MD.

Intracranial Vessel Abnormalities
Literature Review. At the first MR imaging, increased artery tortuosity was present in 45/62 children with MD (73%), absent in 2, and not mentioned in 15. At follow-up, increased artery tortuosity was described in 14/23 (61%; in 4, the finding was not mentioned at the first examination), while in 9, it was not mentioned; no case of tortuosity evolution was reported.
Our Sample. Increased artery tortuosity was detected in all children at any age. The finding was evident both on T2 parenchymal imaging and MRA (On-line Fig 1). Both semiquantitative and quantitative basilar artery evaluation showed significantly increased tortuosity compared with age-matched controls (P < .0001; Table 1 and Fig 2) and no correlation with age.
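For reference, the tortuosity index defined in the Methods, (Curved Length/Linear Length) − 1, can be computed from an ordered vessel centerline as in the minimal Python sketch below; the centerline input is an illustrative assumption, not the syngo workflow itself.

```python
import numpy as np

def tortuosity_index(centerline_xyz):
    """Tortuosity index = (curved length / linear length) - 1; zero for a
    perfectly straight vessel segment."""
    pts = np.asarray(centerline_xyz, dtype=float)
    curved = np.linalg.norm(np.diff(pts, axis=0), axis=1).sum()  # effective length
    linear = np.linalg.norm(pts[-1] - pts[0])                    # end-to-end distance
    return curved / linear - 1.0
```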
Children with MD with neuroradiologic follow-up showed no significant evolution of the basilar artery changes (Table 3). Stenosis and ectasia of the intracranial arteries, though repeatedly mentioned in the literature as possible pathogenic mechanisms of parenchymal lesions, 7,8 were never detected in any of the present children with MD or in those reported in the literature. Widening of the venous sinuses, recently reported in 1 child with MD, 7 was detected in 1/26 children in the present sample (On-line Fig 2A).

White Matter Changes: Tumefactive Lesions
Literature Review. At first MR imaging, white matter tumefactive lesions were detected in 21/62 children with MD (33%; mean age, 4.8 ± 1.9 months; range, 1.2-10 months); in 3/62 (5%), the lesions were absent (mean age, 4.5 months; range, 3.5-6 months), whereas in the remaining 38/62, this finding was not mentioned. In 5/21 cases, tumefactive lesions were not restricted to the temporal regions.
Our Sample. Tumefactive lesions were detected at first MR imaging (On-line Fig 3) and in 2 additional children with MD at follow-up MR imaging (both at 3.6 months of age). Among children with MD examined in the time window of 3-8 months, the detection rate was 9/16 (56%). In addition, among the 9 children with MD with tumefactive lesions, 2 had follow-up MR imaging and both showed full regression of the lesions. The presence of recurrent seizures or status epilepticus in the month preceding MR imaging was not associated with the detection of tumefactive lesions (6/9 versus 18/31 MRIs, nonsignificant, Fisher exact test). Temporal involvement was detected in all children with MD with tumefactive lesions (unilaterally in 1/9) and spread toward other lobes in 6/9 children (patients 5, 7, 9, 10, 16, and 26); the extratemporal involvement was more frequent than that reported in the literature in children with MD (67% versus 23%, P = .04, Fisher exact test). Tumefactive lesions were distinctly asymmetric in 3/9. All tumefactive lesions had strikingly T2-hyperintense subcortical white matter that appeared iso-hypointense on DWI with increased apparent diffusion coefficient values; the cortical ribbon did not appear involved despite sulcal effacement.

WM Changes: Centrum Semiovale DWI Hyperintense Lesions
Literature Review. Bilateral drop-shaped centrum semiovale lesions have been reported in 1/62 children with MD (1.6%) at 10 months of age; the lesions, hyperintense in DWI with restriction of water molecule movement, showed a size increase at 13-month follow-up.
Our Sample. Bilateral centrum semiovale lesions with identical signal features were detected in an 8-month-old child (3.9%, patient 6, On-line Fig 2B).

WM Changes: Focal Nontumefactive White Matter Lesions
Literature Review. Focal white matter lesions other than the tumefactive and DWI-hyperintense ones described above were detected at first MR imaging in 14/62 children with MD (23%; mean age, 7.2 ± 3.6 months; range, 3-17 months) and were not mentioned in the remaining 48/62 children. At follow-up MR imaging, white matter focal lesions were present in 3/23 children with MD (13%; not mentioned in previous examinations), absent in 2/23 (absent and not mentioned in previous examinations in 1 case each), and not mentioned in 18/23 (notably, 3 of them had lesions at first MR imaging, while in 15, the finding had not been mentioned).
Our Sample. Focal white matter lesions were detected in 9/26 children with MD (35%; mean age, 14.1 ± 11.0 months; range, 4.6-32.7 months).
Among the 8 patients with MR follow-up, 5 still had no lesions, 1 showed lesion regression, and 2 presented with new lesions.

Abnormal Myelination
Literature Review. Abnormal myelination was reported at first MR imaging in 20/62 children with MD (32%; mean age, 6.6 ± 3.9 months) and was not mentioned in the remaining 42/62 children. At follow-up, this finding was detected in 9/23 (39%) (in 4/9, it was not mentioned at previous MR imaging) and was not described in the remaining cases.
Our Sample. Abnormal myelination was found in 19/26 (73%; mean age, 8.7 ± 6.5 months; On-line Fig 4). Normal myelination was more frequently detected among earlier MRIs (especially younger than 6 months of age). In 1 child, the abnormal myelination appeared at follow-up at 15.6 months of age (the first MR imaging performed at 4.8 months showed normal myelination for age). Cases with longer follow-up MR imaging showed a progression of the myelination during the disease course (Fig 3). In the girl with MD, the myelination became normal at the last MR imaging (86.7 months of age).

Further Statistical Analyses
Possible associations among clinical data and parenchymal or vascular abnormalities were investigated, revealing a weak association between increased basilar artery tortuosity and white matter lesions, different from tumefactive lesions (0.56 ± 0.35 versus 0.24 ± 0.12, P = .02), and a trend toward an association between earlier epilepsy onset and tumefactive lesions (4.9 ± 2.5 versus 7.6 ± 3.7, P = .07) and between later clinical onset and white matter lesions, different from tumefactive lesions (4.9 ± 2.3 versus 3.2 ± 2.5 months, P = .09). No further significant associations were found among clinical and neuroradiologic findings (eg, tumefactive lesions versus status epilepticus or refractory seizures in the month preceding MR imaging examination, tumefactive lesions versus basal ganglia abnormalities, increased vascular tortuosity severity versus inguinal hernia, a marker of connective tissue laxity, or basal ganglia lesions; P > .1 in all cases).

DISCUSSION

The present retrospective cross-sectional multicenter study analyzed the neuroradiologic MR imaging abnormalities in children affected by Menkes disease, providing a detailed description of intracranial vascular and parenchymal changes in this rare disease. In this Part, we will address the most remarkable intracranial vessel and white matter MR imaging findings in our sample (26 children with MD) and the available literature (62 children with MD).

Intracranial Vascular Changes
Increased artery tortuosity is considered a typical diagnostic feature of Menkes disease, being reported in nearly 75% of cases and absent in <4% of cases. Nonetheless, it is still unclear whether this feature is progressive, eventually leading to blood supply abnormalities; stable since birth; or detectable even at a fetal stage, thus representing a useful prenatal diagnostic disease marker. Indeed, in the MD literature, abnormal intracranial artery tortuosity has sometimes been described only on follow-up examinations, thus suggesting a possible progression of artery changes. However, these case reports lacked a careful description of the anatomic conformation of the intracranial arteries at disease onset, leaving some uncertainty as to whether the tortuosity had truly appeared at follow-up or had been overlooked at disease onset.
In our sample, an increased tortuosity was a constant age-independent MD marker, thus confirming its important diagnostic role in the very first months of life. The youngest child with MD (patient 26), scanned in the neonatal period (20 days of life) for cutis laxa before neurologic onset, already showed the typical vessel changes. On the contrary, the lack of increased intracranial arterial tortuosity seems to make MD rather unlikely and should prompt reconsidering other diagnoses. An increased tortuosity of the intracranial arteries was confirmed in our sample by both semiquantitative and quantitative basilar artery evaluations, which showed significant changes compared with age-matched subjects. Considering that the control group included children undergoing MR imaging/MRA for the suspicion of intracranial vascular problems, the difference is even more meaningful and confirms the major role of intracranial artery evaluation in the diagnostic work-up of MD. Regarding the evolution of the intracranial artery changes, the lack of association between age and both the tortuosity index and the Smoker score at the cross-sectional evaluation, together with the lack of significant changes in children with MD undergoing MR imaging follow-up, provides some evidence of early, most likely late-fetal or early-postnatal, vessel wall damage that remains relatively stable during the subsequent disease course. Intracranial artery elongation has been related to decreased activity of the copper-dependent lysyl oxidase, which is involved in elastin and collagen cross-linking, resulting in structural impairment of the blood vessel wall. [9][10][11][12][13][14][15] Because during the whole intrauterine life the circulating copper is provided by the mother, early vascular wall impairment points to a key role of ATP7A for lysyl oxidase function within the cell. 16 The absence of intracranial artery ectasia and stenosis, either at onset or at follow-up, is another important result. Previous studies have reported possible ectasia or distal narrowing of intracranial arteries. 7,8 Even though no imaging study has yet directly shown significant vessel lumen abnormalities, artery narrowing has been repeatedly advocated as a likely cause of ischemia and encephalomalacia. Actually, according to our experience (On-line Fig 1C), the apparent distal artery caliber change was the result of artifacts related to the acquisition sequence used for investigating intracranial arteries. In fact, the commonly used MRA sequence (TOF) applies a saturation band just above the acquisition slab, aiming to cancel the cranial-caudal (venous) blood flow. The increased tortuosity of intracranial arteries can easily result in artery segments with downward-directed blood flow; the consequent artifactual signal loss might eventually lead to the false detection of stenosis and subsequent ectasia as the blood flow becomes again caudal-cranial. Indeed, the rare pathologic studies seem to confirm our neuroimaging findings, showing mild intima hypertrophy without significant abnormalities of vessel caliber and patency. 6 Recently, dural sinus ectasia has been reported in a child with MD and has been suggested as a potentially interesting new disease marker. 8 Indeed, this feature has also been detected in 1 child of our sample. Nonetheless, its low detection rate (1/26, 4%) raises some doubts about the real impact of venous changes and the utility of a dedicated evaluation of the intracranial venous system in the suspicion of MD.
Parenchymal Abnormalities
White Matter Tumefactive Lesions. In previous publications, [17][18][19] tumefactive white matter lesions have been variably named "white matter cystic changes," 19 "leukoencephalopathy," 20 and "transient edema of the temporal lobes." 21 All these lesions shared the following MR imaging features: The affected white matter presented striking T2-hyperintensity and increased ADC values consistent with vasogenic edema, the cortical ribbon was relatively spared, and the lesions had mild/moderate mass effect. Because data were derived almost exclusively from case reports, the frequency rate of tumefactive lesions in MD has not yet been defined. The present literature analysis points to a frequency rate slightly above one-third of children with MD, similar to that observed in our sample (37% versus 34%, respectively), thus suggesting that tumefactive lesions are usually not overlooked in routine conventional MRIs. Tumefactive lesions may be uni- or bilateral, symmetric or asymmetric; most lesions are localized in the temporal lobes, disclosing the high vulnerability of these regions, especially the anterior portion (temporal pole). Nonetheless, involvement of other cerebral lobes is not uncommon: An extratemporal involvement was reported in nearly one-quarter of the children with MD with tumefactive lesions in the literature. In our sample, the detection of extratemporal lesions rose to two-thirds, most probably not because of differences between the 2 study populations but because of the different, more systematic approach at imaging evaluation. One of the most interesting results of the present study is the identification of a specific age window for the appearance of tumefactive lesions. According to the literature review, children with MD presented with tumefactive lesions solely within the age range of 1.2-10 months (3-8 months in our sample). The recognition of a specific age window for tumefactive lesions implies the existence of an age-dependent brain vulnerability in MD that does not include the very early postnatal period and does not persist beyond the tenth month of life. Besides, the lack of detection of tumefactive lesions in children with MD undergoing MR imaging after 10 months of age suggests an evolution of those lesions that were detected in >50% of MRIs during the age window of 3-8 months. Indeed, the literature review, 18,21 as well as 2 children with MD in our sample (patients 22 and 23), showed the reversibility of tumefactive lesions at follow-up MR imaging, raising some questions about their nature and pathogenesis. In the past, some authors 4,17,20,22 hypothesized for tumefactive lesions an ischemic pathogenesis that is not consistent with the currently available neuroradiologic findings because of the following: 1) Tumefactive lesions involve almost exclusively the white matter, sparing the contiguous cortex, while the latter is usually more vulnerable to ischemia; 2) DWI does not reveal cytotoxic edema, which, instead, characterizes ischemia; 3) tumefactive lesions do not respect vascular territories; and 4) tumefactive lesions seem to be reversible, while ischemic lesions are mostly irreversible and typically result in focal encephalomalacia. Other authors suggested the existence of a pathogenic relationship between tumefactive lesions and refractory seizures or status epilepticus. 17,18,20 A protracted epileptic activity might result in severe parenchymal edema, both vasogenic and cytotoxic. 17
Vasogenic edema primarily derives from the persistent and severe brain acidosis associated with status epilepticus, eventually leading to blood-brain barrier alteration, parenchymal swelling, and sulcal effacement, while cytotoxic edema is likely due to the excessive neuronal stimulation with alteration of cell membrane permeability and neuronal/glial swelling. Postcritical cytotoxic edema usually prevails in the cortex, 17 might spread to the ipsilateral pulvinar and hippocampal formation, and has no or mild mass effect. Actually, tumefactive lesions did not show cytotoxic-like features in any of the children in the literature or in our sample of children with MD. In addition, in our relatively large sample, we were not able to replicate the hypothesized association between tumefactive lesions and the presence of status epilepticus or refractory seizures in the few days before MR imaging, even while restricting the analysis to within the temporal window in which tumefactive lesions have been detected (3-8 months). Taken together, these data do not support the hypothesis of a causal relationship between tumefactive lesions and excessive seizure activity. Most likely, tumefactive lesions and status epilepticus are simply independent epiphenomena of the same copper-related metabolic dysfunction.

DWI Hyperintense Centrum Semiovale Lesions. Although found in only 2 children with MD, considering both the literature and our sample, white matter cytotoxic-like lesions merit discussion due to their very peculiar MR imaging features. In both children with MD, these lesions had an oval or drop-like shape (On-line Fig 2B), were well recognizable on T2 images, but different from tumefactive lesions; they were located in the deep white matter and were strikingly DWI hyperintense with decreased ADC values. In the sole child with MD with follow-up MR imaging after 3 months, the lesions increased in size, maintaining a prevalent cytotoxic edema-like appearance. The pathogenesis of these lesions is still uncertain; the prolonged persistence of cytotoxic edema is not consistent with a brain infarct because the latter loses DWI hyperintensity within 1 month, evolving into focal encephalomalacia; in addition, the affected regions, both at onset and follow-up, did not correspond to a specific vascular territory. Most likely, centrum semiovale lesions are due to copper-dependent metabolic processes, 22 leading to compromised mitochondrial function and eventually to a metabolic, not ischemic, stroke. The very low frequency of DWI hyperintense lesions of the centrum semiovale in children with MD suggests the coexistence of environmental or genetic pathogenic factors, but it also hinders their identification. One of the 2 children with centrum semiovale lesions, for example, had a rare deletion of exon 21, while in the other child, the genetics were not specified.

Focal Nontumefactive White Matter Lesions. According to our literature review, focal nontumefactive white matter lesions are reported in about one-fourth of children with MD, while the analysis of our sample showed a frequency rate slightly above 40%. Once again, the difference in rate is most likely due to a more systematic search in our sample rather than to a true lower rate of focal nontumefactive white matter lesions in the literature data.
These white matter lesions appear as T2/FLAIR-hyperintense/DWI-isointense regions, do not show a precise or restricted temporal occurrence, and present no specific morphologic pattern (all lobes were variably involved in our sample). From the literature review, it was not possible to gain useful information about lesion evolution because follow-up data were lacking. In our sample, focal nontumefactive white matter lesions had a heterogeneous and unpredictable course, with lesions progressing during the disease course in most cases but also with 1 child who showed lesion regression at follow-up. According to some authors, focal white matter lesions stem from the progressive and diffuse cortical neurodegeneration and represent a secondary white matter degeneration. 23 However, this hypothesis does not explain why the lesions are focal, while neurodegeneration is more often a global phenomenon and focal cortical lesions are absent or present with regional inconsistency. In addition, our study did not reveal any association between these lesions and the presence or the severity of brain atrophy. According to other authors, these lesions would result from ischemic phenomena caused by vascular anomalies. 1,4,7,8,19,22,24 The association between intracranial artery tortuosity and nontumefactive white matter lesions found in the present study seems to support a possible link between the processes leading to vascular wall abnormalities and white matter involvement. Nonetheless, because tortuosity changes do not seem to be associated with artery lumen changes, the ischemic hypothesis remains aleatory; moreover, focal white matter lesions do not cluster in specific vascular territories, and both the literature review and our sample analysis did not detect any lesion with DWI features of acute ischemia. An alternative hypothesis suggested a relationship between white matter changes and the metabolic energetic failure due to copper-dependent enzymes. 17 Similar white matter abnormalities have been observed in mitochondriopathies caused by deficiency of the cytochrome C oxidase enzyme, 17 which is also impaired in MD and might result in brain cell death and demyelination. The present study was not powered for addressing this issue properly, but the extreme phenotypic variability of white matter lesions suggests that several pathogenic mechanisms are likely involved, leading to progressive white matter deterioration.

Abnormal Myelination. Abnormal myelination of the supratentorial white matter is a well-known finding in MD. It is present in about one-third of the literature MRIs and in more than two-thirds of our sample. Discrepancies in the detection rate most likely reside in the challenging discrimination between abnormal myelination both in early and in late phases of the disease course. In fact, brain myelination is physiologically incomplete in the first years of life, and its evaluation requires some expertise in pediatric neuroradiology. On the other hand, advanced phases of MD might present with superimposing neurodegenerative processes that imply diffuse gliosis/Wallerian degeneration with concomitant white matter signal abnormalities. Notably, most children with MD have normal myelination at birth, suggesting that the lack of ATP7A function does not significantly influence myelin formation during fetal development, in contrast to what happens regarding vascular tortuosity, which is abnormal even despite maternal-mediated copper absorption.
Abnormal myelination is therefore diagnosed when the expected myelination milestones are not reached (eg, the anterior limb of the internal capsule at 6 months of age) and might become more evident with increasing age. Whether myelination abnormalities were due to a halt or a delay in the myelination process appears unequivocally disentangled by the longitudinal MR imaging evaluation, which, in our sample, repeatedly showed myelination progression (Fig 3) or even full myelin maturation at 7 years of age, though the latter child with MD was the female subject (patient 13), likely with a milder form of MD.

CONCLUSIONS

Intracranial vascular and white matter findings appear strikingly heterogeneous in Menkes disease. According to our data and literature review, a significantly increased arterial tortuosity seems to represent an early and reliable diagnostic biomarker of Menkes disease. On the other hand, there is no evidence of significant vessel wall change evolution during the disease course, and the role of artery changes in the pathogenesis of brain injury appears to be very weak, if present at all. Regarding the white matter involvement, myelin abnormalities seem to result from different, sometimes concomitant pathogenic mechanisms. Besides a delayed-but-improving postnatal myelination process, neurodegenerative phenomena implying gliosis and Wallerian degeneration might variably influence myelin signal. In addition, several heterogeneous focal lesions might occur, some of which seem to reveal a temporally selective white matter vulnerability during the course of Menkes disease.
v3-fos-license
2021-09-01T15:11:47.644Z
2021-06-22T00:00:00.000
237916115
{ "extfieldsofstudy": [ "Materials Science" ], "oa_license": "CCBY", "oa_status": "GREEN", "oa_url": "https://www.mdpi.com/2076-3417/11/13/5794/pdf?version=1624525941", "pdf_hash": "c60187a56578d0271d571a82c3b8b519f5fa3415", "pdf_src": "Adhoc", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41786", "s2fieldsofstudy": [ "Materials Science", "Engineering" ], "sha1": "018f90967f2a2ec54389c07e5851cbf647d64205", "year": 2021 }
pes2o/s2orc
Layer-by-Layer Assembled Nano-Composite Multilayer Gas Barrier Film Manufactured with Stretchable Substrate

Most gas barrier films produce cracks that lead to a significant loss of gas barrier integrity when strain is applied. In order to fabricate stretchable gas barrier films with low water permeability and high endurance after stretching, we used polydiallyldimethylammonium (PDDA) mixed with graphene oxide (GO) and poly(vinyl alcohol) (PVA) mixed with montmorillonite (MMT). These films were manufactured by layer-by-layer assembly on an Ecoflex/polydimethylsiloxane (PDMS) substrate with pre-strain applied. A total of 30 layers of PDDA (GO)/PVA (MMT) coated on the substrate exhibited a low water vapor transmission rate of 2.5 × 10⁻² g/m²·day after 100 cycles of stretching (30% strain). In addition, they exhibited a high light transmittance of 86.54%. Thus, the prepared stretchable gas barrier film has potential applications as a barrier film in transparent and stretchable electronic devices.

Introduction

As the demand for flexible and stretchable electronic devices increases, flexible and stretchable gas barrier films are greatly needed to prevent moisture from entering devices and to increase the lifetime and stability of electronics and display devices [1][2][3]. However, conventional gas barrier films are manufactured using an inorganic film, where cracks or pinholes are generated when the gas barrier film is bent or stretched, and water can penetrate through these defects [4]. Additionally, inorganic layers have been fabricated by vacuum processes. However, vacuum processing has the disadvantages of low production efficiency and high production costs [5]. To solve this problem, we fabricated a stretchable gas barrier film using nano-materials and polymers in place of inorganic layers. Instead of a vacuum process for depositing inorganic layers [6][7][8], a layer-by-layer (LbL) assembly method based on solution processing via electrostatic attraction was used to deposit the passivation layer [9][10][11]. As nano-materials to lengthen the moisture permeation path of water, graphene oxide (GO) and montmorillonite (MMT) were used. Since GO exhibits a large aspect ratio and MMT is a plate-like material, they are suitable for gas barrier films [12][13][14][15]. In addition, the transmittance is an important factor in our gas barrier film because a high transmittance is essential for the film to be applied in a display [16]. Therefore, in our paper, GO and MMT were used to form a flexible gas barrier film with improved transmittance and a decreased water vapor transmission rate. Furthermore, polydiallyldimethylammonium (PDDA) and poly(vinyl alcohol) (PVA) were used as suitable polymers to be mixed with GO and MMT. Polydimethylsiloxane (PDMS) and the Ecoflex series (here, Ecoflex 00-30) are commonly used stretchable substrates. However, each has specific strengths and weaknesses. PDMS exhibits low stretchability and high transmittance, rendering it suitable for transparent applications, whereas Ecoflex exhibits high stretchability and opaque characteristics. Both materials are types of silicone rubber, which means that they are miscible [17][18][19]. Thus, we attempted to mix PDMS and Ecoflex as a stretchable gas barrier film substrate. When the passivation layer was laminated on the stretchable substrate, pre-strain was applied before lamination to eliminate spaces between the nano-clays when the substrate is stretched.
The water vapor transmission rate (WVTR) characteristics and surfaces of the substrates fabricated after applying 10% and 30% pre-strain and without applying pre-strain were compared. The oxygen transmission rate (OTR) was measured via MOCON, and the WVTR of the stretchable gas barrier film was measured via the Ca test method using the electrical properties of calcium [20,21]. The fabricated stretchable gas barrier film was confirmed to be suitable for use in stretchable electronic devices.

Materials and Methods

GO (500 mg/L) was purchased from the Graphene Supermarket, and Na+-MMT was purchased from Southern Clay Products, Gonzales, TX, USA. PVA (Mw = 30,000-70,000, 87-90% hydrolysed) and PDDA (Mw = 200,000-350,000, 20 wt% in H2O) were purchased from Sigma-Aldrich (Seoul, Korea). The substrate was prepared using a mixture of PDMS and Ecoflex 00-30 at a weight ratio of 8:2. PDMS was purchased from Dow Corning, and the PDMS base and curing agent were mixed at a weight ratio of 10:1. Ecoflex was purchased from Smooth-on, and the Ecoflex base and curing agent were mixed at a weight ratio of 1:1. The mixture of PDMS and Ecoflex 00-30 was cured at 60 °C for 5 h. Subsequently, 0.01 wt% GO was magnetically stirred (450-550 rpm) in 200 mL of DI water for 24 h to achieve a uniform dispersion. To prepare a cationic solution, the GO solution was mixed with 0.02 wt% PDDA in DI water for 24 h and stirred for an additional 24 h. The PDDA (GO) solution combined the functional groups on the GO surface and PDDA via electrostatic attractions, resulting in a positive charge on the surface of the resulting material [22,23]. To obtain a uniform MMT particle size, 0.05 wt% MMT was magnetically stirred (450-550 rpm) in 400 mL DI water for 24 h and centrifuged at 4500 rpm for 1 h. Afterwards, the solution containing the large MMT particles was extracted and centrifuged at 1700 rpm for 15 min to obtain a solution with uniform-size MMT particles of 2 to 4 µm. The prepared MMT solution was mixed with a 0.5 wt% PVA solution at a ratio of 3:1 and stirred at 85 °C for 24 h, and the mixture was additionally stirred at room temperature for 24 h. This solution carried a negative charge on the surface [24]. The prepared mixed stretchable substrate was treated with ultraviolet (UV) ozone for 30 min to create OH-radicals on the surface [25]. The stretchable gas barrier film was subsequently fabricated by LbL assembly between the cationic solution of PDDA (GO) and the anionic solution of PVA (MMT). The calcium test method was used to analyze the WVTR characteristics of the fabricated stretchable substrate, and the MOCON OX-TRAN 2/21 MH (as specified in ASTM D-3985) analysis method at 23 °C and 50% RH was used for OTR analysis. The WVTR and OTR values of the gas barrier film were measured using the property that the resistance of Ca changes, owing to its conversion to an insulator when Ca reacts with water and oxygen, decreasing the current under a constant applied voltage. The Ca test method can accurately measure ultra-low water vapor permeability on the order of 10⁻⁶ g/m²·day with high sensitivity and structural compatibility. A zeta potential analyzer (ELSZ-1000, Otsuka Electronics, Osaka, Japan) was used to characterize the electrostatic attraction between the prepared PDDA (GO) and PVA (MMT) solutions. Scanning electron microscopy (SEM) was used for the surface analysis of the stretchable gas barrier film.
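To illustrate how a WVTR value can be extracted from the Ca-test resistance data described above, the following Python sketch uses the generic Ca-corrosion relation (Ca + 2H2O → Ca(OH)2 + H2) and literature values for calcium density and resistivity; these constants, the trace geometry inputs, and the omission of any area correction between the Ca pad and the permeation window are assumptions for illustration, not the authors' exact calculation.

```python
import numpy as np

RHO_CA = 1.55e6              # g/m^3, density of calcium (assumed literature value)
DELTA_CA = 3.4e-8            # ohm*m, resistivity of calcium (assumed literature value)
M_H2O, M_CA = 18.02, 40.08   # g/mol

def wvtr_from_ca_test(time_s, resistance_ohm, length_m, width_m):
    """Estimate WVTR (g/m^2*day) from the slope of the Ca conductance decay;
    length_m and width_m describe the Ca trace between the electrodes."""
    g = 1.0 / np.asarray(resistance_ohm, dtype=float)   # conductance over time
    slope = np.polyfit(np.asarray(time_s, dtype=float), g, 1)[0]  # dG/dt, 1/(ohm*s)
    # Two water molecules are consumed per Ca atom; the conductance loss maps to
    # Ca thickness loss through the trace geometry and the Ca resistivity.
    wvtr_per_s = 2.0 * (M_H2O / M_CA) * DELTA_CA * RHO_CA * (length_m / width_m) * abs(slope)
    return wvtr_per_s * 86400.0                          # per second -> per day
```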
For transmittance analysis of the film, an ultraviolet-visible (UV-Vis) spectrophotometer (Cary 5000, Varian Instruments, Palo Alto, CA, USA) was used in the range of 400 to 800 nm.

Preparing the Stretchable Substrate

Using PDMS as a stretchable substrate results in high transmittance but limited stretchability, whereas Ecoflex substrates provide better stretchability than PDMS but have very low transmittance. To overcome these challenges, an optimum ratio within the trade-off between transmittance and stretchability was determined by testing mixtures of Ecoflex/PDMS as a substrate at a weight ratio of 8:2 [26]. Figure 1 shows that at 550 nm, the transmittance values of the PDMS, Ecoflex, and mixed Ecoflex/PDMS substrates are 100%, 46.45%, and 86.54%, respectively. The PDMS and Ecoflex mixture at a weight ratio of 8:2 was thus confirmed to be suitable for use as a stretchable substrate with sufficient transmittance. OH-radicals were generated on the substrate surface by UV-ozone surface treatment of the mixed PDMS and Ecoflex sample for 30 min. The thickness of the fabricated substrate is 300 µm.

Analysis Using Zeta Potential and Stretchable Gas Barrier Film Fabrication

Using a zeta potential analyzer (ELSZ-1000, Otsuka Electronics), the surface charges of the solutions used for the LbL deposition were determined. As shown in Figure 2, the GO solution exhibited a zeta potential value of −33.6 before mixing with PDDA, but after mixing with PDDA at a ratio of 1:2, the PDDA (GO) solution showed a positive charge of 34.95. The MMT solution exhibited a zeta potential value of −0.54 before mixing with PVA, which changed to a negative charge of −8.4 after mixing at a ratio of 3:1 with PVA. When MMT was mixed with PVA, the PVA permeated between the MMT platelets and bonded with the MMT to increase the diffusion path of water. As confirmed by the zeta potential analysis, the PDDA (GO) solution exhibited a positive charge and the PVA (MMT) solution a negative charge, meaning each layer can be stacked alternately by electrostatic interactions. Herein, 30 alternating layers of PDDA (GO) and PVA (MMT) were stacked on the stretchable substrate. The thickness of the layer fabricated by the LbL process is about 150 nm. The LbL process, the solution process used in this paper, has the advantage of being usable for large-area lamination because it stacks each layer by electrostatic forces. In addition, after drop casting, each layer is rinsed with DI water and then laminated to a thickness on the nano-scale. The drop casting method is more suitable than spin coating for the fabrication of a uniform multilayer film while maintaining pre-strain on the substrate.

Analysis Using FTIR and SEM

FTIR analysis (LabRam ARAMIS IR2, HORIBA JOBIN YVON) confirmed the incorporation between the GO and MMT. As shown in Figure 4, a barrier layer was also fabricated on the stretchable substrate without applying pre-strain [27], and the stretchable gas barrier film was stretched and relaxed. The surfaces of the stretchable gas barrier films fabricated via drop casting without pre-strain and after applying 10% and 30% pre-strain to the stretchable substrate were subsequently analyzed by SEM. Figure 5 shows SEM top-view images of the sample after 30 layers of PDDA (GO)/PVA (MMT) were prepared by LbL deposition at different strains (0%, 10%, and 30%).
PDDA (GO) and PVA (MMT) were well deposited on the mixed Ecoflex and PDMS substrate, as shown in Figure 5a-c, which show the surface of the 30-layer PDDA (GO)/PVA (MMT) laminate after 0%, 10%, and 30% pre-strain, respectively. After pre-strain was applied and subsequently released, many wrinkles appeared on the surface of the substrate where the barrier layer was laminated. The SEM images in Figure 6 show the surface of the substrate after 100 cycles of stretching (30% strain) of the stretchable gas barrier films fabricated with different pre-strains (0%, 10%, and 30%). The barrier layer could not withstand 30% stretching, as indicated by the large number of cracks in the stretched samples prepared without pre-strain (Figure 6a). The 10% pre-strain sample (Figure 6b) showed fewer cracks than the sample prepared without pre-strain, but a few cracks were still observed. In contrast, the 30% pre-strain sample showed high durability after 100 stretching cycles. This confirms that the gas barrier film withstood stretching well owing to the wrinkles formed by applying pre-strain. Thus, it can be concluded that the barrier layers are vulnerable to stretching but show high durability during contraction.

WVTR and Light Transmittance of the Stretchable Gas Barrier Film

We compared the WVTR values of 30 laminated layers of PDDA/PVA (MMT), PDDA (GO)/PVA, and PDDA (GO)/PVA (MMT). When GO and MMT are laminated together, compared with using either material alone, the layers are bonded more stably by hydrogen bonding and crosslinking between GO and MMT; this fills the vacancies formed when the layers are laminated individually and extends the diffusion path length for water molecules. As shown in Figure 7, the WVTR value of the PDDA (GO)/PVA (MMT) films was greatly reduced. We then compared the WVTR values after 100 stretching cycles (30% strain) of the 30 laminated layers of PDDA (GO)/PVA (MMT) on stretchable substrates subjected to different pre-strains. The WVTR values of the gas barrier films were analyzed using the Ca test method. Figure 8a shows that the WVTR values of the samples with a 30-layer PDDA (GO)/PVA (MMT)-coated stretchable substrate without pre-strain, with 10% pre-strain, and with 30% pre-strain were 8.8, 2.8 × 10⁻¹, and 2.5 × 10⁻² g/m²·day, respectively. The OTR values of the films were analyzed by MOCON. As shown in Figure 8b, the OTR of the 30-layer PDDA (GO)/PVA (MMT)-coated stretchable substrate without pre-strain was 178 cc/m²·day, which decreased to 43.88 and 11.91 cc/m²·day for the samples subjected to 10% and 30% pre-strain, respectively. Wrinkles were formed in the film when the gas barrier was laminated after applying 30% pre-strain. Therefore, when the stretching test (30% strain) was performed, the barrier layer did not crack, and the stretchable gas barrier film showed improved WVTR and OTR values. As light transmittance is also important for gas barrier films, the transmittance of the 30-layer PDDA (GO)/PVA (MMT)-coated Ecoflex/PDMS substrate was measured using a UV-Vis spectrophotometer. Figure 9 shows that the transmittance values of the uncoated Ecoflex/PDMS substrate and the 30-layer PDDA (GO)/PVA (MMT)-coated substrate at 550 nm were 86.54% and 80.84%, respectively. This shows that the 30-layer PDDA (GO)/PVA (MMT) stretchable gas barrier film is sufficiently transparent. Therefore, this stretchable gas barrier film can be used in electronic devices requiring transparency and stretchability.
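For a quick sense of scale, the improvement afforded by pre-straining can be expressed as the ratio of the 0% pre-strain value to each pre-strained value reported above; the short snippet below does that arithmetic with the WVTR and OTR numbers quoted in this section.

```python
# Relative improvement of the barrier properties after 100 stretch cycles (30% strain),
# using the values reported above for the 0%, 10%, and 30% pre-strain samples.
wvtr = {"0% pre-strain": 8.8, "10% pre-strain": 2.8e-1, "30% pre-strain": 2.5e-2}   # g/m^2/day
otr  = {"0% pre-strain": 178.0, "10% pre-strain": 43.88, "30% pre-strain": 11.91}   # cc/m^2/day

for name, table in (("WVTR", wvtr), ("OTR", otr)):
    base = table["0% pre-strain"]
    for cond, val in table.items():
        print(f"{name} {cond}: {val:g}  (factor vs. 0% pre-strain: {base / val:.0f}x)")
```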
Discussion

Herein, we fabricated a substrate with 86.54% transmittance by mixing PDMS and Ecoflex in an 8:2 weight ratio. After applying various pre-strains (0%, 10%, and 30%) to the Ecoflex/PDMS substrate, PDDA (GO) and PVA (MMT) solutions with opposite charges were laminated using the LbL method, a non-vacuum process, to form barrier layers with good interlayer bonding via electrostatic attraction. LbL assembly by adsorption from solution is a good approach for fabricating nearly perfectly oriented and aligned nano-material/polymer multilayers. The experimental results confirm that laminating the barrier layers after applying pre-strain produced wrinkles. These wrinkles allow the surface of the barrier layer to accommodate strain, preventing cracks during the stretching test and resulting in an improved WVTR. The 30-layer-coated Ecoflex/PDMS substrate had a low WVTR of 2.5 × 10⁻² g/m²·day after 100 cycles of the stretching test. These results indicate that the nano-material/polymer multilayer structure is well suited for use as a stretchable gas barrier film. The PDDA (GO)/PVA (MMT) multilayer-coated Ecoflex/PDMS gas barrier film shows great potential for use in stretchable applications. In addition, this simple and fast method may be suitable for stretchable gas barrier fabrication technologies.
Improving the Precision of Ability Estimates Using Time-On-Task Variables: Insights From the PISA 2012 Computer-Based Assessment of Mathematics

Log-file data from computer-based assessments can provide useful collateral information for estimating student abilities. In turn, this can improve traditional approaches that only consider response accuracy. Based on the amounts of time students spent on 10 mathematics items from PISA 2012, this study evaluated the overall changes in, and measurement precision of, ability estimates and explored country-level heterogeneity when combining item responses and time-on-task measurements in a joint framework. Our findings suggest a notable increase in precision with the incorporation of response times and indicate differences between countries both in how respondents approached items and in their response processes. Results also showed that additional information could be captured through differences in the modeling structure when response times were included. However, such information may not reflect the testing objective.

INTRODUCTION

Computers have become increasingly common in classroom activities over the past few decades. Reflecting this trend, large-scale educational assessments have moved from paper-and-pencil tests to computer-administered assessments. In addition to being more efficient and reducing human error, computer-based assessments allow for a greater variety of tasks. Further, interactive computer environments can be used to generate log files, which provide easy access to information concerning the examinee response process. These log files contain timestamped data that provide a complete overview of all communication between the user interface and the server (OECD, 2019). As such, it is possible to trace how respondents interact with the testing platform while gathering information about the amount of time spent on each task. The first computer-based administration of the Programme for International Student Assessment (PISA) dates back to 2006 (OECD, 2010). However, more extensive studies involving log files were enabled through the release of the PISA 2009 digital reading assessment (OECD, 2011). In this context, time-on-task and navigating behaviors can be extracted from these log files as relevant variables. The information derived from variables of this type can help teachers further understand the solution strategies used by students while also enabling a substantive interpretation of respondent-item interactions (Goldhammer and Zehner, 2017). The variables taken from log files can also be included in sophisticated models designed to improve student proficiency estimations (van der Linden, 2007). While log-file data from computer-based assessments have been available for several years, few studies have investigated how they can be used to improve the measurement precision of the resulting scores. Using released items from the PISA 2012 computer-based assessment of mathematics, this study therefore explored the potential benefits of incorporating time-on-task variables when estimating student proficiency. We specifically compared three different models to advance the current understanding of what time-on-task adds to scores resulting from an international large-scale assessment program.
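Because the analyses below hinge on time-on-task variables derived from log files, a minimal sketch of how such a variable can be computed from timestamped events may be useful. The column names and events below are hypothetical and do not correspond to the actual PISA log-file schema; the span between a student's first and last recorded event on an item is used as a simple time-on-task proxy.

```python
# Hedged sketch of deriving a time-on-task variable from time-stamped log events.
# Each row of the log is one user/system event; time-on-task for a student-item pair
# is approximated as the elapsed time between the first and last event.
import pandas as pd

log = pd.DataFrame({
    "student_id": [1, 1, 1, 2, 2],
    "item": ["I15Q1", "I15Q1", "I15Q1", "I15Q1", "I15Q1"],
    "timestamp": pd.to_datetime([
        "2012-05-03 09:00:01", "2012-05-03 09:00:40", "2012-05-03 09:01:35",
        "2012-05-03 09:00:05", "2012-05-03 09:02:10",
    ]),
})

time_on_task = (
    log.groupby(["student_id", "item"])["timestamp"]
       .agg(lambda s: (s.max() - s.min()).total_seconds())
       .rename("time_on_task_s")
       .reset_index()
)
print(time_on_task)
```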
Time-On-Task and Item Responses

Several previous studies have investigated the relationship between time-on-task and item responses. For example, Goldhammer et al. (2015) studied the relationship between item responses and response times using a logical reasoning test, finding a non-linear relationship between reasoning skill and response time. Further, Goldhammer and Klein Entink (2011) investigated how time-on-task and item-interactivity behaviors were related to item responses using complex problem-solving items. In addition, Naumann and Goldhammer (2017) found a non-linear relationship between time-on-task and performance on digital reading items from the PISA 2009 assessment. Finally, Goldhammer et al. (2014) studied the relationship between time-on-task, reading, and problem solving using PIAAC data. Results indicated that the association between time-on-task and performance varied from negative to positive depending on the subject matter and type of task. In large-scale educational assessments, student proficiency is mainly estimated within the item response theory (IRT) framework (von Davier and Sinharay, 2013). Here, categorical item-response data are considered manifestations of an underlying latent variable that is interpreted as, for example, mathematics proficiency. While time-on-task can be incorporated in several different ways from an IRT perspective (van der Linden, 2007), the state-of-the-art view considers response times as realizations of random variables, much like the actual item responses (Kyllonen and Zu, 2016). A hierarchical model is most commonly used with time-on-task data. Specifically, a two-level structure is used to incorporate time-on-task, item responses, and latent variables into a single model (van der Linden, 2007). While the hierarchical modeling framework has the advantage of considering both response accuracy and response times as indicators of latent variables, it has the practical limitation of requiring specialized software for model fitting. Molenaar et al. (2015) illustrated how the hierarchical model can be slightly simplified so that standard estimation techniques can be used. This formulation of the model allows the use of both generalized linear latent variable models (Skrondal and Rabe-Hesketh, 2004) and non-linear mixed models (Rijmen et al., 2003) with item-response and time-on-task data. Furthermore, the approach outlined by Molenaar et al. (2015) encompasses not only the standard hierarchical model (with the necessary simplification) but also extensions that allow for a more complex relationship between time-on-task and ability, such as the model of Bolsinova and Tijmstra (2018). For these reasons, this study pursued the approach of Molenaar et al. (2015) for its analysis of the PISA 2012 data.

The Present Study

This study investigated the utility of combining item responses with time-on-task data in the context of a large-scale computer-based assessment of mathematics. It also evaluated the properties of the employed model with respect to each participating country. Specifically, the framework developed by Molenaar et al. (2015) was used to investigate how measurement precision was influenced by incorporating item responses and time-on-task data into a joint model. We also explored country-level heterogeneity in the time-on-task measurement model. As such, the model proposed for this analysis of computer-based large-scale educational assessments implied a different set of underlying assumptions than current procedures. Specifically, we viewed response-time data as an extra source of information that enabled us to gain additional insight regarding the latent construct of interest.
This also implies that any inference regarding the underlying construct at the country level could change under the proposed approach relative to current analysis methods, which this study also investigated. The following three research questions were thus proposed:
• RQ1: What changes occur in the overall ability estimates and their level of precision for the PISA 2012 digital mathematics items when time-on-task data are included in the analysis?
• RQ2: How do time-on-task model parameters differ across items and countries?
• RQ3: What changes occur in country-level performance when time-on-task data are considered in the analysis?
Our findings should add to the current literature on the relationship between time-on-task and responses to performance items. Our results also have important implications for large-scale assessment programs with regard to evaluating the added measurement precision granted by incorporating additional data sources (e.g., time-on-task). Such investigations can inform large-scale assessment programs about whether and how time-on-task data should be included in the models used to generate operational results reports.

The 2012 PISA Computer-Based Assessment of Mathematics

PISA administered its first computer-based mathematics literacy assessment as part of its fifth program edition. A total of 32 countries participated in this effort. In this context, 40 min were allocated for the computer-based portion of the test, with math items arranged in 20-min clusters that were assembled with digital reading or problem-solving prompts (OECD, 2014a). A total of 41 math items were selected for this assessment. These items varied from standard multiple-choice to constructed-response formats. Table 1 presents the characteristics of the PISA sample by country (sample size, math performance, and variation) for the full set of computer-based mathematics clusters (41 items) as well as for the subsample with available and valid log-file data (10 items). We utilized data from a total of 18,970 students across 31 countries. We excluded data from Chile because log-file data for two of the analyzed items were unavailable (I20Q1 and I20Q3). Students with invalid information (e.g., those who did not receive final scores or had incomplete timing information) were also excluded from the analysis. On average, the sample size per country was around 600 (SD = 333), and the percentage of female students was 50% across all countries. The average total time on the 10 items varied from 10.95 to 17.92 min. Brazil was the country with the highest percentage of missing responses (9.92%) on the analyzed items. The analyzed log-file data from the 10 items were made publicly available on the OECD website. We thus extracted the time students spent on the analyzed items and their final responses (i.e., response accuracy). All items belonged to three units (CD production: items I15Q1, I15Q2, and I15Q3; Star points: items I20Q1, I20Q2, I20Q3, and I20Q4; Body Mass Index: items I38Q3, I38Q5, and I38Q6) and were administered in the same cluster. Table 2 shows the item characteristics reported by the OECD (international percentage of correct responses and thresholds used for scaling the items in PISA 2012) as well as the average response time and percentage of missing responses by item.
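The item-level quantities summarized in Table 2 (average time-on-task and percentage of missing responses) can be computed directly from an analysis file; the sketch below shows the idea with a tiny fabricated data frame in which scores and times are stored per item and NaN marks a missing response (the column layout is our assumption).

```python
# Sketch of item-level summaries: mean time-on-task and percentage of missing responses.
# Each row is a student; columns "<item>_score" and "<item>_time" are hypothetical.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "I15Q1_score": [1, 0, np.nan, 1], "I15Q1_time": [45.2, 80.1, 12.0, 60.5],
    "I20Q1_score": [2, np.nan, np.nan, 1], "I20Q1_time": [150.0, 30.2, 25.1, 170.3],
})

items = sorted({c.split("_")[0] for c in df.columns})
summary = pd.DataFrame({
    "mean_time_s": [df[f"{i}_time"].mean() for i in items],
    "pct_missing": [100 * df[f"{i}_score"].isna().mean() for i in items],
}, index=items)
print(summary.round(2))
```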
Although item-position effects were likely negligible given the length of the computer-based assessment (OECD, 2014b), we did observe that the percentages of missing data were larger for items located at the end of the cluster. We used the full-information maximum likelihood (FIML) approach implemented in Mplus version 7.3 (Muthén and Muthén, 2012) to incorporate all available data into our analyses. In doing so, missing responses were treated as missing at random (MAR) and all available data were used in the modeling.

Statistical Analyses

This study compared three measurement models for estimating student proficiency from the PISA dataset described above. All of these models can be seen as special cases of the framework of Molenaar et al. (2015). They are:
• Model 1 (M1): The baseline model, which includes only response accuracy in a unidimensional IRT framework. It can be seen as a special case of the framework of Molenaar et al. (2015) in which it is assumed that there is no relationship between latent proficiency and the response-time data.
• Model 2 (M2): A multidimensional latent variable model for response accuracy and response times, in which response accuracy is related to a latent proficiency and the response times are related to a latent speed. The latent factors are assumed to be correlated. This is a variant of the model described in Molenaar et al. (2015): here, the relationship between latent proficiency and response times is specified through the relationship between latent proficiency and latent speed.
• Model 3 (M3): A multidimensional latent variable model for response accuracy and response times, in which response accuracy is related to a latent proficiency and the response times are related to both a latent speed and proficiency. This is also a special case of the approach of Molenaar et al. (2015),
in which the relationship between latent proficiency and response times goes not only through the relationship between latent proficiency and latent speed, but also through a direct relationship between ability and the individual response times. For this model we employed the rotation approach described in Bolsinova and Tijmstra (2018). For M3, the second latent variable (τ*) was rotated to obtain estimates of the transformed factor loadings (i.e., the factor loadings for speed correspond to the relationship between response time and a latent variable with the same interpretation as τ in M2).
Figure 1 shows the graphical representation of the models fitted across PISA countries. For comparability, the item parameters for response accuracy estimated in Model 1 were fixed in Models 2 and 3. This ensures that the models are on the same scale, since the relationship between response accuracy and latent proficiency is then the same across models. This section presents the mathematical formulation of each model. The steps used to estimate the model parameters with the PISA dataset, and the analysis of measurement invariance across countries, are discussed later.

Model Specification

Let X = (X_1, . . . , X_I) be a random vector of responses to the I items and T = (T_p1, . . . , T_pI) be a random vector of response times on the same items, with realizations x_p· = (x_p1, . . . , x_pI) and t_p· = (t_p1, . . . , t_pI) for each person p. For response accuracy, we adopted the graded response model (GRM) of Samejima (1969). This was done because some PISA items used a partial scoring method and, unlike other IRT models for polytomous data (e.g., the partial credit model), the GRM is equivalent to a simple factor-analytic model for discrete data and can therefore be fitted using standard factor analysis and structural equation modeling software. The differences between the various IRT models for polytomous data are usually very small; in our case, only three items out of 10 allowed partial scoring. The GRM specifies the conditional probability of obtaining each category k ∈ [1 : m], where m is the highest possible category for the item. The conditional probability of obtaining this score or higher, given the latent trait θ, is

Pr(X_i ≥ k | θ) = exp(a_i(θ − b_ik)) / (1 + exp(a_i(θ − b_ik))),   (1)

where a_i is the item factor loading/discrimination parameter and b_ik is the item category threshold parameter. The probability of obtaining a particular response category k is then

Pr(X_i = k | θ) = Pr(X_i ≥ k | θ) − Pr(X_i ≥ k + 1 | θ),   (2)

where Pr(X_i ≥ 0) = 1 and Pr(X_i ≥ m + 1) = 0. When m = 2, the GRM reduces to the two-parameter logistic IRT model of Birnbaum (1968), with only one difficulty parameter b_i per item instead of multiple threshold parameters. M1 is defined exclusively by Equation (2).
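A small, self-contained illustration of the GRM written out in Equations (1) and (2) may help make the parameterization concrete; the item parameters used below are arbitrary.

```python
# Runnable illustration of the graded response model: cumulative probabilities
# P(X_i >= k | theta) and category probabilities P(X_i = k | theta) for one item
# with discrimination a and ordered thresholds b_1 < ... < b_m.
import numpy as np

def grm_category_probs(theta, a, b):
    """Return P(X = k | theta) for k = 0..m, given thresholds b of length m."""
    b = np.asarray(b, dtype=float)
    cum = 1.0 / (1.0 + np.exp(-a * (theta - b)))     # P(X >= k) for k = 1..m
    cum = np.concatenate(([1.0], cum, [0.0]))        # add P(X >= 0) = 1 and P(X >= m+1) = 0
    return cum[:-1] - cum[1:]

probs = grm_category_probs(theta=0.5, a=1.2, b=[-0.8, 0.6])   # a three-category item
print(probs, probs.sum())                                      # category probabilities sum to 1
```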
There are also cases in which both responses and response times are used to estimate respondent proficiency. Here, instead of simply specifying the model for response accuracy, we must specify the full model for the joint distribution of response accuracy and response times. For Model 2, we thus adopted the hierarchical modeling approach of van der Linden (2007), which requires not only the specification of the measurement model for response accuracy (in our case, the GRM) but also the specification of the measurement model for the response times and the models for the relationship between the latent variables in the two measurement models. A model for the relationship between the item parameters in the two measurement models is often specified as well. However, as shown by Molenaar et al. (2015), excluding this relationship does not substantially change the parameter estimates, especially when large sample sizes are involved. Furthermore, including a model for the item parameters prevents the use of standard estimation techniques. Given the very large sample sizes available in this analysis, we therefore specified a higher-order relationship on the person side (i.e., the model for the latent variables), but not on the item side. In the hierarchical model, the joint distribution of response accuracy and response times is conditional on both latent proficiency and latent speed (denoted by τ). In this case, it is assumed to be the product of the marginal distribution of response accuracy, which depends only on latent proficiency, and the marginal distribution of response time, which depends only on latent speed. We refer to this as a simple-structure model because every observed variable is related to only one latent variable. This differs from the extension of the hierarchical model used by Bolsinova and Tijmstra (2018), which includes direct relationships between response times and latent proficiency in addition to their relation to latent speed. A lognormal model with item-specific loadings was used for the response times (Klein Entink et al., 2009). It is equivalent to a one-factor model for the log-transformed response times. The conditional distribution of the response time on item i given the latent speed variable is defined by

ln T_i = ξ_i − λ_i τ + ε_i,  ε_i ∼ N(0, σ²_i),   (3)

which is a lognormal distribution whose mean depends on the item time intensity ξ_i and the latent speed τ. The strength of the relationship between the response time and the latent speed depends on the factor loading λ_i, while σ²_i denotes the item-specific residual variance. The dependence between the latent proficiency and latent speed variables is modeled using a bivariate normal distribution with correlation parameter ρ. This correlation between the latent variables specifies the indirect relationship between response times and latent proficiency. In turn, this allows us to strengthen the measurement of proficiency (i.e., increase measurement precision) by using the information contained in the response times. The magnitude of the improvement in measurement precision is determined solely by the size of the correlation between latent speed and latent proficiency (Ranger, 2013). M3 employed the same model for response accuracy as M1 and M2 but a different model for the response times. That is, the mean of the lognormal distribution of the response time depends on two latent variables, as follows:

ln T_i = ξ_i − λ_i τ* − φ_i θ + ε_i,  ε_i ∼ N(0, σ²_i),   (4)

where the cross-loading φ_i specifies the strength of the direct relationship between response time and proficiency (the sign convention for φ_i mirrors that of λ_i). Here, an asterisk is used for the latent variable τ* because it should be interpreted differently than in the simple-structure model (M2).
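The response-time part of M2 and M3 can likewise be written as a few lines of code; the sketch below evaluates the log-density of a single response time under the lognormal model of Equation (3), with an optional cross-loading term as in Equation (4). The sign convention for the cross-loading is ours, and all parameter values are arbitrary.

```python
# Sketch of the lognormal response-time measurement model: log(T) is normal with mean
# xi - lambda * tau (M2), optionally extended with a cross-loading phi * theta (M3).
import numpy as np
from scipy.stats import norm

def log_rt_density(t, tau, xi, lam, sigma, theta=0.0, phi=0.0):
    """Log-density of one response time t under the (extended) lognormal model."""
    mu = xi - lam * tau - phi * theta
    # lognormal density expressed via the normal density of log(t)
    return norm.logpdf(np.log(t), loc=mu, scale=sigma) - np.log(t)

# One hypothetical item: a 60-second response, time intensity exp(4) ~ 55 s.
print(log_rt_density(t=60.0, tau=0.2, xi=4.0, lam=0.3, sigma=0.5))             # M2
print(log_rt_density(t=60.0, tau=0.2, xi=4.0, lam=0.3, sigma=0.5,
                     theta=1.0, phi=0.1))                                       # M3
```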
Since the cross-loadings between latent proficiency and response time are freely estimated, the correlation between θ and τ* is not identified; it is instead fixed to zero so that τ* can be interpreted as a latent variable explaining the covariance of the response times that cannot be explained by latent proficiency. However, it is possible to rotate the latent variable τ* to match the latent speed variable of the simple-structure model. Following Bolsinova and Tijmstra (2018), we apply a rotation of the factors such that τ* is the latent variable that explains most of the variance of the response times. In that case, the correlation between latent proficiency and speed, and the corresponding values of the transformed factor loadings in the two dimensions, can be calculated.

Analysis Strategies

We used the LOGAN R package version 1.0 (Reis Costa and Leoncio, 2019) to extract student response times and accuracy from the PISA 2012 log file containing data for the 10 digital math items. We then conducted the analyses in two steps. First, we fitted all three models to the combined sample of 31 countries to estimate model parameters at the international level. Then, we analyzed the models across PISA countries by fixing specific parameters from the previous analyses to allow cross-model comparisons. We also evaluated parameter invariance in the response-time model. All model parameters were estimated using the restricted maximum likelihood method in Mplus version 7.3 (Muthén and Muthén, 2012). Table 3 summarizes the analytical framework used in the first step. Item discrimination (a_i) and threshold (b_ik) parameters were freely estimated for Model 1, with the proficiency mean (µ_θ) and variance (σ²_θ) fixed to 0 and 1, respectively. To enable model comparisons, the item discrimination and threshold parameters were not estimated for M2 and M3 but were instead fixed to the parameter estimates from M1. For these models, the response-time parameters (ξ_i, λ_i, σ²_i, and φ_i) and the mean and variance of proficiency were freely estimated. All analyses assumed the same graded response model for the item responses. We evaluated the fit of the GRM for M1 (in which the item discrimination and threshold parameters were freely estimated) by calculating two approximate fit statistics, the Root Mean Square Error of Approximation (RMSEA) and the Standardized Root Mean Square Residual (SRMR), using the complete dataset in the mirt R package (Chalmers, 2012). As a guideline, cutoff values close to 0.08 for the SRMR and close to 0.06 for the RMSEA indicate acceptable fit (Hu and Bentler, 1999). We conducted the country-level analyses in the second step. Table 4 shows the fixed and freely estimated parameters for each model. Here, models containing the suffix "_Full" indicate full measurement invariance. That is, we estimated each country's latent variable parameters (the means and variances of θ_c and τ_c, and the correlation ρ_θτ,c), fixing all item parameters (a_i, b_ik, ξ_ic, λ_ic, σ²_ic, or φ_ic) to the international estimates derived in step one.
Models containing the suffix "_Strong" indicate strong measurement invariance, in which the item-specific residual variances (σ²_ic) are additionally allowed to be estimated. Weak measurement invariance models contain the suffix "_Weak"; here, both the item-specific residual variances (σ²_ic) and the item time-intensity parameters (ξ_ic) were freely estimated. In this case, however, the mean of the latent speed variable was fixed to 0 for model identification. Lastly, structural measurement invariance (suffix "_Struct") indicates that all time-related parameters (ξ_ic, λ_ic, σ²_ic, or φ_ic) are freely estimated. For model identification, we fixed the mean and variance of the latent speed variable to 0 and 1, respectively. Because the φ_ic are not freely estimated in M3_Full, M3_Strong, and M3_Weak, the correlation between the latent variables is identified in those models. We also incorporated an additional constraint in model M3_Struct to allow free estimation of the cross-loading parameters (φ_ic): in this case, we constrained the variance of the latent speed to equal the estimate from the M1_Full model, so that the correlations between the X_i and θ are the same as in Model 1 and θ therefore has a similar interpretation as in M1. After obtaining the parameter estimates in the M3_Struct model, the second latent variable (τ*) was rotated to match the latent speed variable in M2, and ρ_θτ,c was calculated. We estimated student abilities using the expected a posteriori (EAP) approach (Bock and Mislevy, 1982) and evaluated measurement precision using the EAP-reliability method (Adams, 2005) and the average of the standard errors of the ability estimates. Finally, we computed the Bayesian Information Criterion (BIC) for model selection (Schwarz, 1978).

RESULTS

We addressed our research questions by assessing the results in three steps: (1) we estimated the overall ability estimates and their level of precision for the PISA 2012 digital math items under the three measurement models, (2) we presented our findings about the invariance of the response-time model parameters across items and countries, and (3) we examined changes in country-level performance when time-on-task was considered.

RQ1: Overall Performance

We first investigated the model fit of the graded response model. The model was assessed as having good fit based on its SRMR (0.036) and acceptable fit according to its RMSEA (0.050). We thus concluded that our baseline model had sufficiently good overall fit for the subsequent analyses, including those involving time-on-task variables. Table 5 shows the overall estimates of student ability and the measurement precision of these estimates for the PISA 2012 digital math items across the different models. Although the differences were not substantial, M2 and M3 (i.e., the simple-structure hierarchical model and the cross-loadings model, respectively) exhibited increased measurement precision (as captured by larger EAP reliability estimates and smaller average standard errors) when response times were included in the modeling framework.

RQ2: Measurement Invariance

We investigated measurement invariance of the time-on-task parameters for each country with both M2 and M3. We also calculated the BIC for each individual model and summarized these statistics to identify the level of invariance that best represented the data overall (Table 6; the ranges reported there indicate the minimum and maximum values of the BIC statistic by country). Overall, the assumption of invariance of the model parameters does not hold for most countries and models. Weak measurement invariance was preferred in most cases (i.e., there was country-specific heterogeneity in the time-intensity (ξ_i) and residual-variance (σ²_i) parameters of the time-on-task measurement models).
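The EAP reliability and average standard errors compared above can be computed from the EAP estimates and their posterior standard deviations; the sketch below uses one common operationalization of Adams' (2005) EAP reliability (our assumption) with fabricated values.

```python
# Hedged sketch of the EAP-based precision summaries: EAP reliability computed as
#   rel = var(EAP) / (var(EAP) + mean(posterior variance)),
# together with the mean standard error. All values are fabricated.
import numpy as np

rng = np.random.default_rng(0)
eap = rng.normal(0.0, 0.9, size=500)        # hypothetical EAP ability estimates
psd = rng.uniform(0.35, 0.55, size=500)     # hypothetical posterior SDs

eap_reliability = eap.var() / (eap.var() + np.mean(psd ** 2))
mean_se = psd.mean()
print(f"EAP reliability ~ {eap_reliability:.3f}, mean SE ~ {mean_se:.3f}")
```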
To illustrate the differences in the time-on-task measurement model parameters, Figure 2 presents the estimated time-intensity parameters for each item in each country under the preferred model in the simple-structure framework (M2). The graph indicates that students in all analyzed countries spent the most time on the first item, I20Q1, from Unit 20 (the Star Points unit). However, the pattern of estimated time intensities across items varied by country. For example, the estimated time intensity of item I38Q6 was larger than that of item I15Q1 for several countries, but the opposite was found for about as many countries.

RQ3: Country-Level Performance

Figure 3 shows the estimated country means in computer-based mathematical literacy and the associated confidence intervals for the three measurement models. The estimated means did not show substantial discrepancies between the different models for the analyzed countries. Figure 4 shows the estimated reliability of the EAP ability estimates for each country. Measurement precision increased for all countries when time-on-task variables were included; the model containing cross-loadings had the highest estimated EAP reliability. As illustrated in Figure 5, the average standard errors of the ability estimates decreased when time-on-task variables were included. Figure 6 shows the correlations between the EAP ability estimates from the baseline model and those from the models including time-on-task variables. Ability estimates from the model that included cross-loadings generally had lower correlations with the baseline ability estimates than those from the model without cross-loadings. This indicates that the ability estimates from Model 3 captured an additional source of information. However, this may not have reflected the test objective (i.e., estimating students' computer-based mathematical literacy).

SUMMARY AND DISCUSSION

This study examined the extent to which inferences about ability in large-scale educational assessments are affected and improved by including time-on-task information in the statistical analyses. This issue was explored using data from the PISA 2012 computer-based assessment of mathematics. In line with statistical theory, model-based measurement precision (as captured by the EAP reliability estimates) improved when using the standard hierarchical model rather than the response-accuracy-only model for each of the 31 considered countries that participated in the PISA program. This increase was notable for most countries, with many showing increases in estimated EAP reliability at or above 0.05. If this version of the hierarchical model adequately captures the data structure, it can therefore provide a notable increase in precision over the default response-accuracy-only models. For practically all countries, model-based measurement precision increased further when using the extended version of the hierarchical model, which allows a direct link between response times and ability through the cross-loadings, rather than the standard hierarchical model. This model successfully extended the hierarchical model by considering overall response speed as relevant to the estimated ability while also allowing individual item response times to be linked to that ability when such patterns were present in the data.
Thus, the model allowed time-on-task to provide more collateral information when estimating ability than was possible with the standard hierarchical model. This increase in precision was also notable for most countries (generally between 0.02 and 0.03). However, the increase was generally less sizable than that obtained by using the hierarchical model instead of a response-accuracy-only model. Thus, the biggest gain in precision was already obtained by using the simple-structure hierarchical model; extending the model by incorporating cross-loadings generally resulted in only modest additional gains. We investigated the extent to which the time-on-task parameters could be considered invariant across countries for both the simple-structure hierarchical model and the extension that included cross-loadings. The results suggested that only weak measurement invariance existed; full or strong measurement invariance did not hold. That is, our findings suggest that countries may differ both in item time-intensity (capturing how much time respondents generally spent on items) and in the item-specific variability of the response times (i.e., the degree to which respondents differed in the amounts of time they spent on particular items). This suggests relevant differences between countries in how respondents approached the items as well as in their response processes. Measurement precision improved for all countries when using the selected versions of M2 and M3 (i.e., over the precision obtained using M1). Since changing the model used to analyze the data may also affect model-based inferences, we also analyzed the extent to which such inferences would be affected by these changes. Here, no country showed a substantial change in estimated mean, suggesting that the overall assessment of proficiency levels for different countries was not heavily affected by the change of model. However, the estimated correlations between the individual ability estimates obtained using M1, M2, and M3 showed small deviations from 1 for many countries, suggesting that the ability being estimated does not overlap perfectly across the three models. The differences between M1 and M3 were most notable in this regard; they generally resulted in the lowest correlations between ability estimates. This is not surprising, as these two models also have the largest differences in modeling structure. However, one should carefully consider which of the models best operationalizes the specific ability that is to be estimated. Additional validation research is thus needed to determine whether the inclusion of time-on-task information results in overall improved measurement quality.

DATA AVAILABILITY STATEMENT

Publicly available datasets were analyzed in this study. These data can be found at: https://www.oecd.org/pisa/pisaproducts/database-cbapisa2012.htm.

AUTHOR CONTRIBUTIONS

DRC: conceptualization. DRC, MB, JT, and BA: methodology and paper writing. DRC: data analysis. All authors contributed to the article and approved the submitted version.

FUNDING

The authors received financial support from the research group Frontier Research in Educational Measurement (FREMO) at the University of Oslo for the publication of this article. JT was supported by the Gustafsson & Skrondal Visiting Scholarship at the Centre for Educational Measurement, University of Oslo.
Targeting Poly(ADP)ribose polymerase in BCR/ABL1-positive cells

BCR/ABL1 causes dysregulated cell proliferation and is responsible for chronic myelogenous leukemia (CML) and Philadelphia chromosome-positive acute lymphoblastic leukemia (Ph1-ALL). In addition to the deregulatory effects of its kinase activity on cell proliferation, BCR/ABL1 induces genomic instability by downregulating BRCA1. PARP inhibitors (PARPi) effectively induce cell death in BRCA-defective cells. Therefore, PARPi are expected to inhibit the growth of CML and Ph1-ALL cells showing downregulated expression of BRCA1. Here, we show that PARPi effectively induced cell death in BCR/ABL1-positive cells and suppressed colony-forming activity. Prevention of BCR/ABL1-mediated leukemogenesis by PARP inhibition was tested in two in vivo models: wild-type mice that had undergone hematopoietic cell transplantation with BCR/ABL1-transduced cells, and a genetic model constructed by crossing Parp1 knockout mice with BCR/ABL1 transgenic mice. The results showed that a PARPi, olaparib, attenuates BCR/ABL1-mediated leukemogenesis. One possible mechanism underlying PARPi-dependent inhibition of leukemogenesis is increased interferon signaling via activation of the cGAS/STING pathway. This is compatible with the use of interferon as a first-line therapy for CML. Because tyrosine kinase inhibitor (TKI) monotherapy does not completely eradicate leukemic cells in all patients, combined use of a PARPi and a TKI is an attractive option that may eradicate CML stem cells.

Results

Effect of olaparib on leukemia cells. Expression of BRCA1 by bone marrow-derived cells from BCR/ABL1 transgenic (Tg) mice was lower than that by cells from wild-type (WT) mice (Fig. 1a). BRCA1 is a key molecule in the homologous recombination repair (HRR) pathway. Therefore, we examined the effect of BCR/ABL1 expression on HRR activity. As expected, HRR activity was downregulated upon expression of BCR/ABL1 (Fig. 1b). These results suggest that BCR/ABL1-expressing cells exhibit homologous recombination defects (HRD). Next, bone marrow-derived mononuclear cells (MNCs) from WT and BCR/ABL1 Tg mice were exposed to the PARPi olaparib in vitro, and cell death was analyzed by Annexin V/propidium iodide staining. Olaparib induced cell death in both WT and BCR/ABL1 Tg mouse-derived MNCs in a dose- and time-dependent manner. BCR/ABL1 Tg mouse-derived MNCs were more sensitive to olaparib than those from WT mice (Fig. 1c,d).

Olaparib prevents transformation activity by BCR/ABL1. It would be interesting to ascertain whether olaparib prevents transformation, or whether PARP is required for transformation by BCR/ABL1. Therefore, we examined BCR/ABL1-mediated transformation activity in Rat-1 cells 12. Rat-1 cells were either mock-infected or infected with BCR/ABL1 (Supplemental Fig. 1a), and colony-formation activity was monitored in the presence or absence of olaparib. Treatment with olaparib reduced the colony transformation activity of BCR/ABL1 (Supplemental Fig. 1b,c).

Olaparib reduces the potential of BCR/ABL1-expressing cells to repopulate HSCs. Next, we performed colony assays to examine the repopulating activity of hematopoietic stem cells (HSCs) in order to analyze the effect of PARP inhibition. The colony-forming activity of HSCs from WT and BCR/ABL1 Tg mice was examined sequentially after replating under continuous olaparib exposure. Intriguingly, colony-forming activity of BCR/ABL1-expressing HSCs was abolished after the third replating, whereas wild-type HSCs retained this activity (Fig. 2a).
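Dose-dependent cytotoxicity of the kind reported in Fig. 1c,d is commonly summarized by fitting a four-parameter logistic (Hill) curve and reading off an IC50; the sketch below shows such a fit on fabricated viability data and is not an analysis performed in this study.

```python
# Generic sketch: four-parameter logistic fit to dose-response viability data,
# yielding an IC50. Concentrations and viabilities below are fabricated.
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, top, bottom, ic50, slope):
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** slope)

conc_uM = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])        # hypothetical olaparib doses
viability = np.array([0.98, 0.93, 0.80, 0.55, 0.30, 0.12])  # hypothetical fraction viable

popt, _ = curve_fit(hill, conc_uM, viability, p0=[1.0, 0.0, 3.0, 1.0], maxfev=10000)
print(f"estimated IC50 ~ {popt[2]:.2f} uM")
```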
We hypothesized that genomic instability reduces the HSC-repopulating potential of BCR/ABL1-expressing cells. Usually, HSCs arrest in G0 and only enter the cell cycle when stimulated. Also, CML progenitors demonstrate increased susceptibility to repeated cycles of chromosome damage and repair via a breakage-fusion-bridge (BFB) mechanism 13. Therefore, we examined BFB generation under conditions of cytokine stimulation. Even though the cells were stimulated with cytokines, the number of wild-type cells did not differ from that of BCR/ABL1-expressing cells. As expected, as the cells progressed through the cell cycle, the numbers of BFBs in the Parp1−/− and Parp1+/+ BA Tg/− HSC populations increased moderately, whereas the number in the Parp1−/− BA Tg/− HSC population was markedly higher than in the Parp1+/+, Parp1−/−, and Parp1+/+ BA Tg/− HSC populations (Fig. 2b and Supplemental Fig. 1d). Next, to gain insight into how olaparib affects stem cell maintenance, we compared differentially expressed genes between DMSO-treated and olaparib-treated cells using RNA sequencing-based transcriptome analysis. When we focused on genes highly expressed in olaparib-treated cells, we identified genes associated with the TP53 signaling pathway (Table S1 and Supplemental Fig. 2a). Conversely, a focus on genes downregulated in olaparib-treated cells identified genes involved in oxidative phosphorylation (OXPHOS) (Table S1 and Supplemental Fig. 2b).

Activation of the cGAS/STING pathway in BCR/ABL1-expressing cells. PARPi have a broad range of biological effects 21 that may have caused the observed reduction in survival of BCR/ABL1-positive cells. The cGAS/STING pathway, which is responsible for de novo synthesis of antiviral type I interferons (IFNs) and their related gene products, is triggered by cytosolic DNA to induce antitumor immune responses 22. Accumulation of DNA damage following PARP inhibition leads to leakage of damaged double-stranded DNA into the cell cytoplasm, which activates innate immune signaling through the cGAS/STING pathway, leading to increased expression and release of type I IFN 17,23,24. IFN was once the standard frontline treatment for CML because its pleiotropic mechanism of action includes immune activation and specific targeting of CML stem cells 25. Therefore, based on the hypothesis that cGAS/STING pathway-mediated activation of the IFN machinery exerts cytotoxic effects on CML leukemic stem cells (LSCs), we investigated the effect of olaparib on activation of the cGAS/STING pathway. As expected, olaparib induced cGAS-bound micronuclei (Fig. 3a,b) and TBK1 phosphorylation, which is crucial for STING activation (Fig. 3c,d). Furthermore, we observed increased expression of IFN-α and CCL5 mRNA (Fig. 3e and Supplemental Fig. 3a,b). RNA sequencing also revealed upregulation of IFN-responsive genes (Fig. 3f).

Olaparib inhibits BCR/ABL1-dependent leukemia in vivo. Next, we evaluated the effects of PARP inhibition on BCR/ABL1-expressing cells using an in vivo model of hematopoietic cell transplantation. Mouse HSCs were infected with a BCR/ABL1-expressing retrovirus and then transplanted into lethally irradiated mice. Starting at 1 day post-transplantation, mice received oral olaparib (100 mg/kg, five times per week) or vehicle. Death from BCR/ABL1-mediated leukemia was observed in sham-treated mice at 1 month post-transplantation. All of the mice that received vehicle died within 6 months.
However, none of the olaparib-treated mice developed leukemia, and all survived for 6 months (Fig. 4a). After all sham-treated mice had died, olaparib administration to the remaining mice was terminated, and their survival was monitored for about 12 months post-transplantation. Three out of six mice died after termination of olaparib, giving an overall survival of 50%. Thus, olaparib extended survival significantly (p = 0.0005).

Parp1 knockout increases survival of BCR/ABL1 transgenic mice. To further confirm the role of PARP inhibition in preventing BCR/ABL1-mediated leukemogenesis, we used a genetic approach. Instead of inhibiting PARP with an inhibitor, we crossed Parp1 knockout mice with BCR/ABL1 Tg mice and examined leukemia development and death. As shown previously, leukemia in BCR/ABL1 Tg mice develops 6 months after birth, after which the mice start to die 18. Although there was no difference in white blood cell counts between Parp1 wild-type (Parp1+/+ BA Tg/−) and Parp1 knockout BCR/ABL1 Tg mice (Parp1−/− BA Tg/−) (Supplemental Fig. 4), we found it interesting that leukemia development was delayed in Parp1−/− BA Tg/− mice; these mice survived longer than Parp1+/+ BA Tg/− mice (Fig. 4b).

Discussion

Targeting BRCA1/2-deficient tumors with PARPi is a standard therapeutic option for hereditary breast and ovarian cancer (HBOC). Currently, PARPi are being developed to target not only BRCA1/2-defective HBOC but also other types of cancer that harbor HRD 26. Therefore, the genomic instability caused by BCR/ABL1-mediated downregulation of BRCA1 expression in CML and Ph1-ALL is an attractive candidate for targeted therapy with PARPi. Here, a murine transplantation model yielded data supporting the potential of PARPi for the treatment of BCR/ABL1-positive leukemia. This result was also supported by results obtained using a genetic model constructed by crossing Parp1 knockout mice with BCR/ABL1 transgenic mice, and by previous reports 27,28. PARPi act against multiple PARP family proteins; among them, olaparib is a potent inhibitor of PARP1 and PARP2. Here, we assessed only Parp1 knockout mice, although Parp1/Parp2 double-knockout mice die as embryos 29. This difference may be due to the supporting effects of PARP2 on PARP1. We and others showed previously that PARPi have different cytotoxic effects depending on the BCR/ABL1-positive leukemic cell line [30][31][32]; however, these studies did not characterize differences between the cell lines 30,33. By contrast, PARPi exerted a cytotoxic effect against MNCs and HSCs from BCR/ABL1 Tg mice. Therefore, we hypothesized that these conflicting results can be explained by different genetic changes in different cells. Accumulation of multiple genetic alterations and disruption of cell signaling pathways play crucial roles in leukemic transformation. Nontransformed cells from BCR/ABL1 Tg mice carry relatively simple genetic alterations, i.e., those that affect only BCR/ABL1 expression, which mimics the chronic phase of CML. The results of our in vitro repopulation colony assay suggest that PARPi attenuate HSC homeostasis in BCR/ABL1-positive cells. HSCs localize in hypoxic environments and so cannot generate ATP using oxygen-consuming mitochondrial OXPHOS; therefore, they rely predominantly on glycolysis. Whereas normal HSCs thus show a glycolytic metabolic profile, CML LSCs show higher TCA cycle flux and mitochondrial respiration than their normal HSC counterparts and rely more on OXPHOS 34.
Therefore, downregulation of OXPHOS genes via PARP inhibition may contribute to eradication of BCR/ABL1-positive LSCs. LSCs accumulate high levels of reactive oxygen species (ROS) and oxidative DNA damage 35; thus, HSCs become exhausted. HSCs from Atm or Foxo knockout mice, or from other mouse models defective in DNA repair, exhibit premature exhaustion due to accumulation of ROS and/or DNA damage 36. Farrés et al. reported that Parp2−/− mice exposed to sublethal doses of irradiation exhibit bone marrow failure, which correlates with reduced long-term repopulation of irradiated Parp2−/− HSCs under competitive conditions 38. In addition, Li et al. reported that activation of PARP1 by salidroside protects quiescent HSCs from oxidative stress-induced cycling and self-renewal defects, both of which are abrogated by genetic ablation or pharmacologic inhibition of PARP1 39. Expression of BCR/ABL1 augments DNA damage and/or increases ROS production in HSCs. Therefore, under conditions of PARP knockout or PARP inhibition, these cells may easily become exhausted. Another explanation is that HSCs in vivo are in a quiescent state and only proliferate when stimulated. Here, we found no significant difference in the amount of DNA damage in HSCs from Parp1−/− BA Tg/− and Parp1+/+ BA Tg/− mice; however, we observed increased genomic instability in proliferating HSCs from Parp1−/− BA Tg/− mice. Thus, PARP inhibition may accelerate exhaustion of replicating BCR/ABL1-positive HSCs. Historically, IFN has been used as a first-line therapy for patients with chronic-phase CML who are not eligible for allogeneic stem cell transplantation; this was the case until the introduction of the potent BCR/ABL tyrosine kinase inhibitor imatinib mesylate. IFN can activate the immune system to target and eradicate CML stem cells. A subset of HSCs is highly quiescent 40; thus, the effects of PARPi (i.e., creation of DSBs by inhibiting repair of SSBs) may be limited in these cells. Activation of the cGAS/STING pathway may explain the effects of PARPi on BCR/ABL1-positive LSCs. TKIs remain the gold-standard treatment for CML and Ph1-ALL. A previous study showed that combining a TKI with a PARPi increases the antileukemic effect against BCR/ABL1-positive cells 27. TKIs have markedly improved the outcome of patients with CML. However, only 40-60% of patients with CML who show a deep

Material and methods

Cells and cell culture. KOPN30, BV173, and K562 are BCR/ABL1-positive leukemia cell lines. All leukemia cell lines, as well as Ba/F3 cells, were maintained in RPMI-1640 medium supplemented with 15% fetal bovine serum (FBS) and penicillin-streptomycin (100 U/mL) at 37 °C in an atmosphere containing 5% CO2. KOPN30 cells were obtained from the University of Yamanashi School of Medicine (Yamanashi, Japan). BV173 and Ba/F3 cells were obtained from DSMZ (Braunschweig, Germany). K562 cells were obtained from the JCRB Cell Bank (Osaka, Japan). Rat-1 cells were obtained from the RIKEN Cell Bank (Tsukuba, Japan). All cell lines were tested for mycoplasma contamination. Rat-1 cells and the fibroblast line MRC5SV harboring a single integrated copy of DR-GFP (DR-GFP MRC5SV) were maintained in Dulbecco's Modified Eagle's Medium (DMEM) supplemented with 10% FBS and penicillin-streptomycin (100 units/mL) at 37 °C in an atmosphere containing 5% CO2.

DR-GFP assay. The BCR/ABL1-expressing plasmid was constructed by subcloning BCR/ABL1 into the MSCV plasmid.
The DR-GFP assay was performed as previously described 9. Briefly, mock or BCR/ABL1-expressing plasmids were transiently transfected into single-copy DR-GFP-integrated MRC5SV cells using X-tremeGENE 9 (Roche, Basel, Switzerland). On the next day, cells were transfected with the I-SceI expression vector pCBAS. GFP expression was monitored by flow cytometry 48 h after transfecting cells with pCBAS.

Cell death analyses. The percentage of apoptotic cells was measured by flow cytometry after staining with a combination of Annexin V (Abcam, Cambridge, MA) and propidium iodide 10. Growth-inhibitory effects were assessed using a Cell Counting Kit (Dojindo, Kumamoto, Japan). A combination index, used to assess synergistic effects, was calculated using CompuSyn software 11.

Rat-1 cell transformation assay. A BCR/ABL1-mediated transformation assay using Rat-1 cells was performed as previously described 12. Colony number was counted on day 21. Colony-forming activity was also measured using a CytoSelect™ 96-well Cell Transformation Assay Kit (Cell Biolabs, San Diego, CA).

Breakage-fusion-bridge (BFB) formation assay. Detection of nucleoplasmic bridges was used to assess BFB frequency; the assay was optimized for mouse cells as described previously 13. Briefly, mouse HSCs were isolated using immunomagnetic columns as described above (Miltenyi Biotec) and then cultured in αMEM supplemented with 20% FCS, 50 ng/mL mouse SCF, 50 ng/mL mouse FLT3 ligand, 50 ng/mL human IL-6, and 50 ng/mL human TPO. Next, cells were exposed to 2 Gy X-ray irradiation and cultured for 48 h, followed by addition of cytochalasin D (0.6 μg/mL) for 24 h. Cells were then released from cytochalasin D treatment for 2 h and exposed to cold hypotonic (0.075 M KCl) solution. Finally, cells were fixed in Carnoy's fluid, dropped onto slides, stained with DAPI, and examined under a fluorescence microscope at a magnification of ×400.

Immunoprecipitation and western blotting.

Hematopoietic stem cell (HSC) transplantation and transduction of BCR/ABL1. The BCR/ABL1-expressing plasmid was constructed by subcloning BCR/ABL1 into the MSCV-IRES-GFP plasmid. Plat-E cells 20, an ecotropic packaging cell line, were transfected with MSCV-BCR/ABL1-IRES-GFP using polyethyleneimine. Supernatants containing high titers of retrovirus were collected at 48 and 72 h and concentrated using a Retro-X Concentrator (TAKARA-Clontech, Ohtsu, Japan). LTR-HSCs were cultured overnight in αMEM supplemented with 20% FBS plus 50 ng/mL each of mouse stem cell factor (SCF), human IL-6, human FLT3 ligand, and human thrombopoietin (TPO). On day 2, cells were placed in 24-well dishes coated with RetroNectin (TAKARA-Clontech, Shiga, Japan) and infected with concentrated retrovirus particles. At 60 h post-infection, retrovirus-infected LTR-HSCs were transplanted into mice that had received (6 h earlier) myeloablative conditioning with 9.5 Gy total body irradiation. Mice were allowed access (ad libitum) to water containing 1 mg/mL neomycin trisulfate salt hydrate and 100 U/mL polymyxin B sulfate salt.

Statistical analysis. P-values for the DR-GFP, apoptosis, cell survival, transformation, leukemic stem cell detection, and qPCR assays were calculated using a t test. Survival curves were constructed using the Kaplan-Meier method and analyzed using the log-rank test. All statistical tests were two-sided, and a p-value of < 0.05 was considered significant.
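As a hedged illustration of the survival analysis described above, the snippet below builds Kaplan-Meier estimates and a log-rank test with the lifelines package; the survival times are fabricated and only mimic the structure of the transplantation experiment.

```python
# Sketch of a Kaplan-Meier / log-rank analysis with the lifelines package.
# Durations (days) and censoring indicators below are fabricated.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

days_vehicle   = np.array([45, 60, 75, 90, 120, 150])        # all animals died (events)
event_vehicle  = np.ones_like(days_vehicle)
days_olaparib  = np.array([200, 240, 300, 360, 360, 360])     # some censored at study end
event_olaparib = np.array([1, 1, 1, 0, 0, 0])

km = KaplanMeierFitter()
km.fit(days_vehicle, event_vehicle, label="vehicle")
print("median survival, vehicle:", km.median_survival_time_)

result = logrank_test(days_vehicle, days_olaparib,
                      event_observed_A=event_vehicle, event_observed_B=event_olaparib)
print(f"log-rank p = {result.p_value:.4f}")
```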
v3-fos-license
2020-08-06T09:03:14.931Z
2020-07-29T00:00:00.000
231827692
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GREEN", "oa_url": "https://doi.org/10.21203/rs.3.rs-50275/v1", "pdf_hash": "3ef6e851ea22499d97d5b2ac9806e2b7f8147ed0", "pdf_src": "MergedPDFExtraction", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41790", "s2fieldsofstudy": [ "Medicine" ], "sha1": "9766a249e23054d3c249ae4d362de859b120972c", "year": 2020 }
pes2o/s2orc
Patients with COVID-19 Interstitial Pneumonia Exhibit Pancreatic Hyperenzymemia and Not Acute Pancreatitis

Raffaele Pezzilli, Department of Gastroenterology, San Carlo Hospital
Stefano Centanni, Respiratory Unit, ASST Santi Paolo e Carlo, San Paolo Hospital, Department of Health Sciences, Università degli Studi di Milano
Michele Mondoni, Respiratory Unit, ASST Santi Paolo e Carlo, San Paolo Hospital
Rocco F. Rinaldo, Respiratory Unit, ASST Santi Paolo e Carlo, San Paolo Hospital, Department of Health Sciences, Università degli Studi di Milano
Matteo Davì, Respiratory Unit, ASST Santi Paolo e Carlo, San Paolo Hospital
Rossana Stefanelli, Laboratory of Clinical Chemistry, ASST Santi Paolo e Carlo, San Paolo Hospital
GianVico Melzi d'Eril, Università degli Studi di Milano
Alessandra Barassi (alessandra.barassi@unimi.it), Laboratory of Clinical Chemistry, ASST Santi Paolo e Carlo, San Paolo Hospital, Department of Health Sciences, Università degli Studi di Milano

Introduction

Novel coronavirus (SARS-CoV-2)-infected pneumonia was originally reported to be associated with exposure to the seafood market in Wuhan; it then spread to more than 100 countries and led to tens of thousands of cases within a few months [1]. On March 11th, 2020, the World Health Organization (WHO) officially declared the outbreak of COVID-19 to be a pandemic [2]. Gastrointestinal manifestations of COVID-19 have been well established [3]; COVID-19 also involves the liver [4]. Acute pancreatic involvement in patients with COVID-19 has recently been reported; acute pancreatitis was defined on the basis of an elevation of serum pancreatic enzymes [5]. Confirmed cases of acute pancreatitis in COVID-19 patients have been anecdotally reported in two of three family members, with the remaining one having only hyperamylasemia [6]. The present study was undertaken to assess the frequency of acute pancreatitis in consecutive patients affected by COVID-19.

Methods

The study was carried out in the Respiratory Unit, ASST Santi Paolo e Carlo, San Paolo Hospital, Milan, Italy, and it was approved by the local Ethical Committee Milano Area 1 (n. 2020/ST/057), with a priori patient or appropriate proxy consent obtained prior to the participants' entry into the study, which was then carried out in accordance with the Helsinki Declaration of the World Medical Association. Acute pancreatitis was defined as the presence of prolonged typical pancreatic pain associated with findings of pancreatic abnormalities on imaging and a three-fold increase in serum amylase and lipase activity [7]. Our patients did not require orotracheal intubation and sedation, unlike the severely ill patients in the Intensive Care Unit [8], because we had previously demonstrated that serum pancreatic enzymes can be elevated in those patients [9]. The inclusion criteria were age equal to or greater than 18 years, novel coronavirus infection confirmed by real-time polymerase chain reaction (PCR), a diagnosis of COVID-19 according to the World Health Organization (WHO) interim guidelines [10], and chest computed tomography demonstrating lung involvement. Both genders were included. Serum samples were obtained from all subjects at their initial observation; they were kept frozen at -20 °C until analysis.

Patients

From April 1st, 2020 to April 30th, 2020, 110 consecutive patients (69 males, 41 females; mean age 63.0 years, range 24-93 years) met these criteria and were enrolled in the study.
The average time from the onset of respiratory symptoms to blood sampling was 22.2 days (range 0-47). Novel coronavirus (SARS-CoV-2) antibodies (immunoglobulin [Ig] M/IgG) were also evaluated using the commercially available BioMedomics IgM-IgG Combined Antibody Rapid Test kit (Morrisville, NC, USA). It is one of the world's first rapid point-of-care lateral flow immunoassays for the diagnosis of coronavirus infection. The test has been used widely by the Chinese Center for Disease Control to combat COVID-19 infections and is now being made available globally. This newly developed IgG-IgM combined antibody test kit has a sensitivity of 88.66% and a specificity of 90.63% [11]. All patients were treated according to current therapeutic modalities [12]; regarding ventilation support, seven patients (6.4%) did not receive any ventilator support, 42 (38.2%) received oxygen via nasal cannula, oxygen mask or an oxygen mask with a reservoir, 41 (37.3%) were on a continuous positive airway pressure device (CPAP), and 20 (18.2%) were on noninvasive mechanical ventilation (NIMV). All authors had access to the study data and reviewed and approved the final manuscript.

Serum assays

Serum pancreatic amylase and serum lipase were assayed using commercially available kits. Pancreatic amylase was assayed using a pancreatic isoamylase assay (Sentinel Ch. S.p.A., Milan, Italy). The linearity of the method was 4.0 to 2000 U/L; the within-run coefficient of variation (CV) was 0.3 to 0.7% and the total imprecision CV was 3.0 to 5.7%. The upper reference limit (URL) of pancreatic amylase was 53 U/L. Lipase was assayed using VITROS Chemistry Products LIPA Slides (Ortho-Clinical Diagnostics, High Wycombe, United Kingdom). The URL of lipase was 300 U/L. The linearity of the method was 10 to 2000 U/L; the within-run CV was 1.1 to 6.1% and the total imprecision CV was 1.8 to 12.2%. Total bilirubin (upper reference value 1.30 mg/dL), direct bilirubin (upper reference value 0.3 mg/dL), alanine aminotransferase (ALT) (upper reference value 35 U/L), aspartate aminotransferase (AST) (upper reference value 36 U/L), gamma-GT (GGT) (upper reference value 58 U/L) and C-reactive protein (CRP) (upper reference value 10 mg/L) were assayed using commercially available kits (Ortho-Clinical Diagnostics, High Wycombe, United Kingdom).

Statistical analysis

No statistical sample size calculation was carried out a priori, and the sample size was equal to the number of patients treated during the study period. Data are reported as mean values ± standard deviations for continuous variables and as absolute numbers and percentages for categorical variables. The Kolmogorov-Smirnov test was used to evaluate the normal distribution of the blood parameters. Statistical analyses were carried out using the Mann-Whitney U-test, the Spearman rank correlation and the chi-squared test (a sketch of these tests on hypothetical data is given after the Results below). The statistical analyses were carried out by running the SPSS/PC+ statistical package (SPSS Inc., ver. 23.0, Chicago, IL, USA) on a personal computer. Two-tailed P values less than 0.05 were considered statistically significant.

Results

There were no differences in age between male (62.1±15. Regarding oxygen support, a statistically significant difference in serum amylase activity was found among the various types of oxygen support used (P = 0.047), whereas this difference was not found for lipase (P = 0.065) (Table 3).
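As an illustration of the comparisons described in the Statistical analysis section, the following Python sketch applies the same tests (Mann-Whitney U, Spearman rank correlation, chi-squared) with scipy. The enzyme values, group assignments, and the 2x2 table are hypothetical placeholders, not the study data.

```python
# Sketch of the statistical tests described in the Methods
# (Mann-Whitney U, Spearman rank correlation, chi-squared).
# All values below are hypothetical placeholders, not patient data.
import numpy as np
from scipy import stats

# Hypothetical serum pancreatic amylase values (U/L) in two groups,
# e.g., patients with vs. without diarrhea.
amylase_group_a = np.array([35, 48, 60, 72, 41, 55])
amylase_group_b = np.array([30, 38, 45, 52, 33, 47])
u, p_mw = stats.mannwhitneyu(amylase_group_a, amylase_group_b, alternative="two-sided")
print(f"Mann-Whitney U = {u:.1f}, two-sided p = {p_mw:.3f}")

# Hypothetical correlation between amylase and lipase values.
amylase = np.array([35, 48, 60, 72, 41, 55, 80, 95])
lipase = np.array([120, 150, 210, 260, 140, 190, 300, 340])
rho, p_rho = stats.spearmanr(amylase, lipase)
print(f"Spearman rho = {rho:.2f}, p = {p_rho:.3f}")

# Chi-squared test on a hypothetical 2x2 table, e.g., elevated amylase
# (yes/no) by IgG status (positive/negative).
table = np.array([[20, 7],
                  [65, 18]])
chi2, p_chi, dof, _ = stats.chi2_contingency(table)
print(f"chi-squared = {chi2:.2f} (dof = {dof}), p = {p_chi:.3f}")
```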
Discussion

The gastrointestinal manifestations of COVID-19 are well known; this study found that 12.7% of patients had diarrhea and 2.7% had nausea/vomiting; these figures are similar to those reported by Wang et al. [13]. Nausea, vomiting and abdominal discomfort may also appear during the course of the disease [14]. The reason why (SARS-CoV-2)-infected pneumonia may involve the gastrointestinal tract is probably that the virus has been found more commonly in the saliva [15] but has also been found in the feces of 29% of patients [16]. However, more recently, it has been reported that SARS-CoV-2 infection may also cause acute pancreatic damage [17]. The Authors of that retrospective study found that 17% of patients experienced a pancreatic injury; this finding was supported only by the elevation of serum amylase and lipase. In addition, confirmed cases of acute pancreatitis in COVID-19 patients have been anecdotally reported in two of three family members, with the other one having only hyperamylasemia [6]. The present study was undertaken for this reason. Acute pancreatitis was defined according to accepted international criteria [7], and the Authors found that none of their patients had an episode of acute pancreatitis as defined by the presence of persistent abdominal pain associated with a 5-fold increase in serum pancreatic enzymes and imaging showing acute alterations of the pancreatic gland. On the contrary, the Authors found that COVID-19 patients could have an increase in serum pancreatic enzymes, such as amylase (24.5% of the cases) and lipase (16.4% of the cases), and only one patient had a three-fold increase in this enzyme without pain and without alteration of the pancreatic gland on imaging. The elevations of the serum pancreatic enzymes were not related to the presence of diarrhea or nausea/vomiting, or to IgM and IgG status. Why this happens requires additional studies. Similarly to COVID-19, pancreatic hyperenzymemia has been reported in other viral diseases: in 78 patients with chronic liver diseases due to hepatitis C virus (HCV) or hepatitis B virus (HBV) infection, serum amylase levels were abnormally elevated in 27 patients (35%; 22 liver cirrhosis, 5 chronic active hepatitis), whereas serum lipase levels were elevated in only 16 patients (21%; 15 liver cirrhosis, 1 chronic active hepatitis) [18]; this also happens in mumps and HIV infection [19,20]. Pancreatic hyperenzymemia in these patients could result from various causes: pancreatic cells highly express angiotensin-converting enzyme 2, the transmembrane protein required for SARS-CoV-2 entry [21], and the pancreatic renin-angiotensin system plays important endocrine and exocrine roles in hormone secretion [22]; hyperenzymemia could also be an effect of the drugs used for antiviral therapy. Although no studies specifically examining the pancreas of COVID-19 patients were found, a recent pathological report from China showed that, although the damage was located predominantly in the lungs, there were slight alterations in the pancreas, mainly represented by degeneration of some islet cells [23]. The data in the present study regarding liver function tests confirmed those previously reported [24], and the Authors suggest that, in patients with elevated liver function tests who have suspected or known COVID-19, it is also necessary to consider alternative etiologies.
In hospitalized patients, it is also useful to obtain these tests at the time of admission and throughout the hospitalization, particularly in the context of COVID-19 drug treatment. Finally, this study used a new test for evaluating the immunological response to novel coronavirus infection. It was found that this assay, considering only strong positivity for the two classes of immunoglobulins, had a sensitivity of 73.4% for IgM and 87.3% for IgG. These preliminary data are interesting but should be confirmed by studies involving a larger number of patients.

In conclusion, during COVID-19, serum amylase is more frequently elevated than serum lipase, but none of the patients with pancreatic hyperenzymemia showed acute clinical pancreatic injury. The presence of pancreatic hyperenzymemia in a patient with COVID-19 requires that the management of these patients be guided by clinical evaluation and not merely by evaluation of the biochemical results.

Declarations

Funding: None

Conflicts of interest: the authors disclose no conflicts

Author Contributions:

Tables

Table 1. Values of the various biochemical assays in the 110 patients studied. Results are reported as mean, standard deviation (SD) and range.
v3-fos-license